Dataset schema:
- id: string (lengths 4 to 10)
- text: string (lengths 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
2515769470
Core: Add params supported by tos client Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] Describe the solution you'd like A clear and concise description of what you want to happen. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Additional context Add any other context or screenshots about the feature request here. #71
gharchive/issue
2024-09-10T08:29:51
2025-04-01T04:36:15.152936
{ "authors": [ "yanghua" ], "repo": "volcengine/tosfs", "url": "https://github.com/volcengine/tosfs/issues/70", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1327557461
T008: HTTPError: 403 Client Error: Forbidden for url: http://ligand-expo.rcsb.org/reports/8/8AM/ Hi, in T008, while running this code:

```python
pdb_id = "5UG9"

def get_ligands(pdb_id):
    info = pypdb.get_info(pdb_id)
    nonpolymers = info.get("rcsb_entry_info", {}).get("nonpolymer_bound_components", [])

    ligands = {}
    for ligand_expo_id in nonpolymers:
        r = requests.get(
            f"http://ligand-expo.rcsb.org/reports/{ligand_expo_id[0]}/{ligand_expo_id}/"
        )
        r.raise_for_status()
        html = BeautifulSoup(r.text)

        info = {}
        for table in html.find_all("table"):
            for row in table.find_all("tr"):
                cells = row.find_all("td")
                if len(cells) != 2:
                    continue
                key, value = cells
                if key.string and key.string.strip():
                    info[key.string.strip()] = "".join(value.find_all(text=True))

        # Postprocess some known values
        info["Molecular weight"] = float(info["Molecular weight"].split()[0])
        info["Formal charge"] = int(info["Formal charge"])
        info["Atom count"] = int(info["Atom count"])
        info["Chiral atom count"] = int(info["Chiral atom count"])
        info["Bond count"] = int(info["Bond count"])
        info["Aromatic bond count"] = int(info["Aromatic bond count"])
        ligands[ligand_expo_id] = info

    return ligands

get_ligands(pdb_id)
```

I am getting the following error:

```
Traceback (most recent call last):
  File "Script.py", line 405, in <module>
    ligands = get_ligands(pdb_id)
  File "Script.py", line 342, in get_ligands
    r.raise_for_status()
  File "/home/mfkhan91/anaconda3/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://ligand-expo.rcsb.org/reports/8/8AM/
```

Please help me resolve the issue. Thanks, Faraz. Hi @khanmf, thanks a lot for raising this issue; I noticed problems yesterday as well (the website was still OK on Tuesday). I had hoped that the issue would be resolved today (sometimes waiting a day has helped in the past with other web services), but that does not seem to be the case. Search page: http://ligand-expo.rcsb.org/ld-search.html. Entering e.g. STI in the first search box now returns an "Oops - Search failed" error. Ligand page: in one of our Jupyter notebooks, we access data directly from URLs like https://ligand-expo.rcsb.org/reports/S/STI/. Access to these URLs is now forbidden ("You don't have permission to access…"). I contacted the RCSB team and will update you here once I have an answer. Until then, there is unfortunately nothing we can do for T008, except fetch the ligand metadata you need from another resource, e.g. directly from the CIF file, if available. If you are in a hurry: I recently learned that you can access a PDB entry's ligand data from RCSB using GraphQL. Maybe this can help you while Ligand Expo resolves the issue (though this will only fetch a subset of all the ligand metadata that the Ligand Expo website offers). https://github.com/volkamerlab/teachopencadd/issues/248#issuecomment-1201656346 Good news, @khanmf, the RCSB team has resolved this issue :tada: @dominiquesydow Great! Thanks for letting me know.
gharchive/issue
2022-08-03T17:15:56
2025-04-01T04:36:15.161689
{ "authors": [ "dominiquesydow", "khanmf" ], "repo": "volkamerlab/teachopencadd", "url": "https://github.com/volkamerlab/teachopencadd/issues/260", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
692590212
Layout problem in half-width form fields Hi all, I'm looking for a CSS trick for the following: when I make the form fields half-width, the fields do not align properly (see the first screenshot). I want to achieve the layout shown in the second screenshot, filling the layout with the remaining fields. Could anybody help with some guidelines? Hi Volkanceylan, I'd appreciate your comment on this. I just want to utilize the right-hand side of the HtmlReportContentEditor (so the form looks good) and fill it with the remaining fields of the form, because when I increase the height of the HtmlReportContentEditor, the area to its right remains a huge blank space after the one field on the right side. I guess you might have written some mixins to handle this. Kindly advise. Regards, Prem. Hi, please check the following issue, it might help you: #5119 Hello @reach2rv, thank you so much for your help; this leads me in the right direction. I will try this. Closed due to inactivity. Also check the issue guidelines. If you still need help, you can ask this on Stack Overflow with our new tag: https://stackoverflow.com/tags/serenity-platform @VictorTomaili thanks. The solution by @reach2rv is working for me.
gharchive/issue
2020-09-04T01:46:02
2025-04-01T04:36:15.167097
{ "authors": [ "VictorTomaili", "premsudheer", "reach2rv" ], "repo": "volkanceylan/Serenity", "url": "https://github.com/volkanceylan/Serenity/issues/5188", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1659613377
No module named 'raft' Hi, I have an error when I try to run the script:

```
C:\AUTOMATIC1111>py script.py
Traceback (most recent call last):
  File "C:\AUTOMATIC1111\script.py", line 15, in <module>
    from raft import RAFT
ModuleNotFoundError: No module named 'raft'
```

Apparently I need to install raft, but how do I add raft to AUTOMATIC1111? Thanks. No, you don't need to install raft in AUTOMATIC1111. It should hook up the RAFT repository when you clone SD-CN-Animation. If that didn't happen for some reason, go to the SD-CN-Animation folder, open a CLI and run the `git clone https://github.com/princeton-vl/RAFT.git` command. It should create a RAFT folder with all the necessary scripts, and it should work fine afterwards. Thanks, raft works now, but I have another problem: when I run the script it opens a window that quickly closes, and I just get this message in the command prompt: `0it [00:00, ?it/s]`. In the script, do I need to write the paths in a format like this?

```
INPUT_VIDEO = "C:\AUTOMATIC1111\INPUTVIDEO\1.mp4"
OUTPUT_VIDEO = "C:\AUTOMATIC1111\OUTPUTVIDEO\1.mp4"
```

Yeah, the format seems fine to me. I suspect it cannot find the video file for some reason. Try moving it into the SD-CN-Animation folder and set INPUT_VIDEO = "1.mp4". Also make sure that the video plays in a video player. OK, it launched, but after generating the first picture I have an error:

```
OpenCV: FFMPEG: tag 0x5634504d/'MP4V' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
  1%|▋  | 1/117 [00:08<16:53,  8.74s/it]
Traceback (most recent call last):
  File "C:\AUTOMATIC1111\SD-CN-Animation\script.py", line 234, in <module>
    _, alpha_mask, warped_styled = RAFT_estimate_flow_diff(prev_frame, frame, prev_frame_styled)
  File "C:\AUTOMATIC1111\SD-CN-Animation\script.py", line 100, in RAFT_estimate_flow_diff
    RAFT_model.load_state_dict(torch.load(args.model))
  File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 699, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "C:\Users\Alex\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'RAFT/models/raft-things.pth'
```

Is it the video format that's not supported? Nope, it cannot find the 'raft-things.pth' model. I forgot to mention it in the instructions: you have to set up the RAFT repository as described here: https://github.com/princeton-vl/RAFT. Basically it just comes down to running "./download_models.sh" to download the models. I would suggest not creating a special conda environment, but choose whatever is more convenient for you. It works! But the picture becomes blurry and dark after a few minutes (first and last frames were attached as screenshots). Also, it generates the video, but an error message is displayed in the cmd of AUTOMATIC1111:

```
Error running process: C:\AUTOMATIC1111\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "C:\AUTOMATIC1111\modules\scripts.py", line 417, in process
    script.process(p, *script_args)
  File "C:\AUTOMATIC1111\extensions\sd-webui-controlnet\scripts\controlnet.py", line 628, in process
    unit = self.parse_remote_call(p, unit, idx)
```

Do you know why it becomes darker and darker after each render? Here is another example. I'm really impressed by the result; your script is clearly awesome. I can't wait to have the darker-images bug fixed. I cannot say anything regarding the web-ui error, but it might be related to the darkening, as I never experienced it myself while working on the script. Every render it writes:

```
Error running process: C:\AUTOMATIC1111\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "C:\AUTOMATIC1111\modules\scripts.py", line 417, in process
    script.process(p, *script_args)
  File "C:\AUTOMATIC1111\extensions\sd-webui-controlnet\scripts\controlnet.py", line 628, in process
    unit = self.parse_remote_call(p, unit, idx)
  File "C:\AUTOMATIC1111\extensions\sd-webui-controlnet\scripts\controlnet.py", line 540, in parse_remote_call
    unit.enabled = selector(p, "control_net_enabled", unit.enabled, idx, strict=True)
AttributeError: 'str' object has no attribute 'enabled'
```

I found someone else with the same error message: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9107 Did you make ControlNet accessible through the API in the settings? My ControlNet settings (screenshot): OK, it seems like a bug in web-ui. You can do the following: go to modules/api/api.py in web-ui and replace this line (it should be at line 275):

```python
script_args[alwayson_script.args_from:alwayson_script.args_to] = request.alwayson_scripts[alwayson_script_name]["args"]
```

with this:

```python
for idx in range(0, min((alwayson_script.args_to - alwayson_script.args_from), len(request.alwayson_scripts[alwayson_script_name]["args"]))):
    script_args[alwayson_script.args_from + idx] = request.alwayson_scripts[alwayson_script_name]["args"][idx]
```

Please tell me if it helped. Thanks, that fixed the webui error message! But the picture still becomes darker and darker. There's another message in the cmd of the script (screenshot). Just a warning, you can ignore it. No idea why it is darkening, though. It's very frustrating: your script has awesome temporal consistency and I really want to use it. I'm forced to add "--no-half-vae" to generate the video; maybe it's because of that? It seems to be a problem with the VAE: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6858 That solved the problem; I downloaded the VAE for the checkpoint I use. Thank you for the help! There's a little bit of flicker, but the DaVinci Resolve deflicker fixes that :) I have a last question: does it work with LoRA? Thanks
gharchive/issue
2023-04-08T18:54:51
2025-04-01T04:36:15.192808
{ "authors": [ "alexfredo", "volotat" ], "repo": "volotat/SD-CN-Animation", "url": "https://github.com/volotat/SD-CN-Animation/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2554872899
[Bug]: Attempt to read property "value" on null error What happened? Hello, I installed the plugin and enabled 2FA. Then, when I tried to log in, I got this error: Attempt to read property "value" on null. To resolve it, in vendor\vormkracht10\filament-2fa\src\Http\Livewire\Auth\LoginTwoFactor.php, line 44, I removed ->value from $this->twoFactorType = $this->challengedUser->two_factor_type->value; and it worked. What is this caused by? Can you check? How to reproduce the bug (translated from Turkish): To resolve this error, removing value from the code $this->twoFactorType = $this->challengedUser->two_factor_type->value; at line 44 of vendor\vormkracht10\filament-2fa\src\Http\Livewire\Auth\LoginTwoFactor.php made it work. What causes this? Could you check? Package Version 1.6^ PHP Version 8.2.4 Laravel Version 11.0.0 Which operating systems does this happen with? Windows Notes No response Did you follow all the installation steps from the readme? @enessvg @Baspa Yes Can you provide me a reproduction repository? Then I'll be happy to help @enessvg I can't provide it right now, but I can leave photos like this. Deleted value code: $this->twoFactorType = $this->challengedUser->two_factor_type; @Baspa Did you run migrations? Can you show me your user model, and did you set up 2FA for the user? @enessvg Yes, I've run migrations. User.php:

```php
use Laravel\Fortify\TwoFactorAuthenticatable;
use Vormkracht10\TwoFactorAuth\Enums\TwoFactorType;

use HasFactory, Notifiable, HasRoles, HasPanelShield, TwoFactorAuthenticatable;

protected function casts(): array
{
    return [
        'email_verified_at' => 'datetime',
        'password' => 'hashed',
        'two_factor_type' => TwoFactorType::class,
    ];
}
```

I just did a fresh installation here and got no errors: https://github.com/Baspa/reproduction-repo (you can check my latest commit) @enessvg OK, I will try to reinstall and try again, thanks. Any updates on this issue? @enessvg @Baspa Sorry for not replying, I still haven't tried it; I will try it as soon as possible and write here. I looked at using this package and got the same error; it would appear the casting doesn't cast null/empty two_factor_types. Ideally there should be either a default, or a plain login if the 2FA type is not set. Thanks for checking, both; will try to look for a fix later today @enessvg / @tonypartridge Did you guys install this package after already using Laravel Fortify? I guess this issue might happen when the user has already set up 2FA on their account but didn't use this package yet. Then 2FA might be "enabled", which causes the user to be redirected to the 2FA login page, but there is no 2FA type set. If that's the case I might need to change the documentation. @enessvg / @tonypartridge Yep! Using Jetstream. Many thanks, Tony. https://filamentphp.com/plugins/vormkracht10-2fa#installation I followed what was said here and got that error @enessvg / @tonypartridge After I merge #49 in a few minutes, you should update the package and re-run php artisan filament-two-factor-auth:install. Remove the previous migration from this package before running the install command. When you run the migrations through the install command again, it will now prompt whether you want to update existing users to set their missing two_factor_type. It will default to authenticator. @Baspa Thank you, I'm glad I wrote about this error here. Good stuff, will review. Only thoughts are: "Would you like us to set the two factor type to "authenticator" for existing users? (yes/no)" should be "Would you like us to set the two factor type to "authenticator" for users that previously used 2FA in Fortify? (yes/no)". I tried this, disabled normal 2FA and then tried email, but never got the email even with the queue working. Feel free to submit a PR! @tonypartridge What are the exact steps you did? Have you set a two_factor_type on the user? Does it have its email set?
gharchive/issue
2024-09-29T10:59:22
2025-04-01T04:36:15.217472
{ "authors": [ "Baspa", "enessvg", "tonypartridge" ], "repo": "vormkracht10/filament-2fa", "url": "https://github.com/vormkracht10/filament-2fa/issues/47", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
627465638
Fixes a server crashing bug Occurs currently when someone submits a file upload without a type selected. Product approved this approach to just set a default on the radio buttons. @bmadhusu Yeah, I debated whether or not to include it, and sort of reversed my stance on it for this particular repo only. Given that exceptions in the Node app seem to bubble all the way out to killing the docker container right now, and quay.io is of dubious stability, I think it's maybe safer to include the change to the fleet template. Previous to this bug I would have expected the node process to crash, but nodemon to restart it without it breaking out and crashing the docker container. But maybe that's just a dev setting? Anyway, given that any server exception could take down the process, and any user trying it multiple times can bring down the dashboard, the tradeoff of having this in place for the node app is maybe worth it over the chance that quay.io could go down between us uploading an image and a server trying to download it and failing and silently running old code. All of which is to say I'm not super thrilled to include it, but the very real possibility of a user crashing the entire dashboard again with a different unknown bug is probably higher.
gharchive/pull-request
2020-05-29T18:43:40
2025-04-01T04:36:15.221296
{ "authors": [ "tank157" ], "repo": "votinginfoproject/Metis", "url": "https://github.com/votinginfoproject/Metis/pull/483", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
784687828
[BUG] Colab installs are unstable TLDR: Colab support is unstable until this issue can be resolved. When working with Colab notebooks today, importing fiftyone suddenly stopped working in fresh runtimes. This issue also arose for me when installing versions of fiftyone that previously worked, e.g. fiftyone==0.7.0.1 and fiftyone-db==0.2.0, so I would assume something has changed about Colab itself. At a minimum, the issue relates to the main process timing out when requesting the port number of the database service. This issue was resolved during development of Colab support, but now it seems to have reappeared for different reasons. Traceback:

```
---------------------------------------------------------------------------
ServiceListenTimeout                      Traceback (most recent call last)
<ipython-input-3-34017a3769a1> in <module>()
----> 1 import fiftyone

12 frames
/usr/local/lib/python3.6/dist-packages/fiftyone/__init__.py in <module>()
     22 __path__ = extend_path(__path__, __name__)
     23
---> 24 from fiftyone.__public__ import *
     25 import fiftyone.constants as _foc
     26 from fiftyone.utils.uid import _get_user_id

/usr/local/lib/python3.6/dist-packages/fiftyone/__public__.py in <module>()
      9 import fiftyone.core.service as fos
     10
---> 11 _database_service = fos.DatabaseService()
     12 config = foc.load_config()
     13

/usr/local/lib/python3.6/dist-packages/fiftyone/core/service.py in __init__(self)
    196
    197     def __init__(self):
--> 198         super().__init__()
    199
    200     @property

/usr/local/lib/python3.6/dist-packages/fiftyone/core/service.py in __init__(self)
     77         self.child = None
     78         if not self._disabled:
---> 79             self.start()
     80
     81     def __del__(self):

/usr/local/lib/python3.6/dist-packages/fiftyone/core/service.py in start(self)
    281         import fiftyone.core.odm.database as food
    282
--> 283         food.set_default_port(self.port)
    284         food.get_db_conn()
    285

/usr/local/lib/python3.6/dist-packages/fiftyone/core/service.py in port(self)
    268     @property
    269     def port(self):
--> 270         return self._wait_for_child_port()
    271
    272     def start(self):

/usr/local/lib/python3.6/dist-packages/fiftyone/core/service.py in _wait_for_child_port(self, port, timeout)
    175             raise ServiceListenTimeout(etau.get_class_name(self), port)
    176
--> 177         return find_port()
    178
    179     @classmethod

/usr/local/lib/python3.6/dist-packages/retrying.py in wrapped_f(*args, **kw)
     47             @six.wraps(f)
     48             def wrapped_f(*args, **kw):
---> 49                 return Retrying(*dargs, **dkw).call(f, *args, **kw)
     50
     51             return wrapped_f

/usr/local/lib/python3.6/dist-packages/retrying.py in call(self, fn, *args, **kwargs)
    210                 if not self._wrap_exception and attempt.has_exception:
    211                     # get() on an attempt with an exception should cause it to be raised, but raise just in case
--> 212                     raise attempt.get()
    213                 else:
    214                     raise RetryError(attempt)

/usr/local/lib/python3.6/dist-packages/retrying.py in get(self, wrap_exception)
    245             raise RetryError(self)
    246         else:
--> 247             six.reraise(self.value[0], self.value[1], self.value[2])
    248     else:
    249         return self.value

/usr/local/lib/python3.6/dist-packages/six.py in reraise(tp, value, tb)
    701             if value.__traceback__ is not tb:
    702                 raise value.with_traceback(tb)
--> 703             raise value
    704         finally:
    705             value = None

/usr/local/lib/python3.6/dist-packages/retrying.py in call(self, fn, *args, **kwargs)
    198         while True:
    199             try:
--> 200                 attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
    201             except:
    202                 tb = sys.exc_info()

/usr/local/lib/python3.6/dist-packages/fiftyone/core/service.py in find_port()
    173             except psutil.Error:
    174                 pass
--> 175             raise ServiceListenTimeout(etau.get_class_name(self), port)
    176
    177         return find_port()

ServiceListenTimeout: fiftyone.core.service.DatabaseService failed to bind to port
```

Continuing to monitor this. It is almost certainly related to as-of-yet inexplicable changes in Colab VMs. As of this writing, installs are working again. Stale
gharchive/issue
2021-01-13T00:19:15
2025-04-01T04:36:15.231546
{ "authors": [ "benjaminpkane" ], "repo": "voxel51/fiftyone", "url": "https://github.com/voxel51/fiftyone/issues/772", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1223543038
Fix path in App connection Resolves #1717 Thanks! I've checked the fix by pulling the latest branch and running it in my environment. It's working. Thanks for the fix!
gharchive/pull-request
2022-05-03T01:41:45
2025-04-01T04:36:15.233303
{ "authors": [ "benjaminpkane", "jdalbosc-cisco" ], "repo": "voxel51/fiftyone", "url": "https://github.com/voxel51/fiftyone/pull/1719", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1725991543
build-docs gh action checkout teams Proposal for how to include the management SDK in the build-docs github action. How it works:
- Checks out fiftyone as normal
- Checks out main branch of fiftyone-teams under fiftyone/fiftyone-teams
  - needs a deployment key secret to checkout the private repo (see below)
  - harder to correspond branches dev->dev or releaseX->releaseY, so for now just use main, which should be the latest release
- Uses the -t fiftyone-teams option to generate_docs.bash to link the relevant folders in before building

What would be needed to test and finalize it:
- Create an ssh key
- Upload the private key as a secret to fiftyone, called TEAMS_SSH_PRIVATE_KEY
- Upload the public key as a deployment key to fiftyone-teams with read-only access
- Delete the generated ssh key from the local machine

I think it is safe because:
- the deployment key only has read-only access to the single repo
- the private ssh key is contained within a github secret and nowhere else

Alright, I fixed and tested it; now it will build properly with fiftyone-teams/main linked in (it will probably keep failing until the fiftyone-teams release branch is merged into main)
gharchive/pull-request
2023-05-25T14:56:50
2025-04-01T04:36:15.238101
{ "authors": [ "swheaton" ], "repo": "voxel51/fiftyone", "url": "https://github.com/voxel51/fiftyone/pull/3113", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
96905412
Issues with setup Hey all, I just want to say this project is absolutely awesome. I'm considering adopting it as the platform for the minor graphic and news app requests we have the San Francisco Chronicle. I'd also be down for adding Python support and other blueprints once it's deployed here :) Anyway, I ran into an error setting up the project. For whatever reason, while the output looks like its working, the project isn't retrieving the front-end code from the project template. As far as I can tell, I have rbenv, Ruby and redis installed correctly. Perhaps I'm missing a step (I also don't know what the DB login is haha) so let me know if I'm missing something. Thanks so much! Issue Autotune setup, as listed in the wiki, starts a new rails project and runs the database migrations, but does not install any of the Autotune template code from this repo (e.g., the appjs directory is missing). Here's what I see when I navigate to http://localhost:3000: Reproduce Follow instructions from: https://github.com/voxmedia/autotune/wiki/setup $ rails new autotune_app -m https://raw.githubusercontent.com/voxmedia/autotune/master/rails_template.rb $ cd autotune_app $ tree . . ├── Gemfile ├── Gemfile.lock ├── Procfile ├── README.rdoc ├── Rakefile ├── app │   ├── assets │   │   ├── images │   │   ├── javascripts │   │   │   └── application.js │   │   └── stylesheets │   │   └── application.css │   ├── controllers │   │   ├── application_controller.rb │   │   └── concerns │   ├── helpers │   │   └── application_helper.rb │   ├── mailers │   ├── models │   │   └── concerns │   └── views │   └── layouts │   └── application.html.erb ├── bin │   ├── bundle │   ├── rails │   ├── rake │   ├── setup │   └── spring ├── config │   ├── application.rb │   ├── boot.rb │   ├── database.yml │   ├── environment.rb │   ├── environments │   │   ├── development.rb │   │   ├── production.rb │   │   └── test.rb │   ├── initializers │   │   ├── assets.rb │   │   ├── autotune.rb │   │   ├── backtrace_silencers.rb │   │   ├── cookies_serializer.rb │   │   ├── filter_parameter_logging.rb │   │   ├── inflections.rb │   │   ├── mime_types.rb │   │   ├── omniauth.rb │   │   ├── resque.rb │   │   └── session_store.rb │   ├── locales │   │   └── en.yml │   ├── routes.rb │   ├── secrets.yml │   └── unicorn.rb ├── config.ru ├── db │   ├── development.sqlite3 │   ├── migrate │   │   ├── 20150723203757_create_blueprints.autotune.rb │   │   ├── 20150723203758_create_tags.autotune.rb │   │   ├── 20150723203759_create_blueprint_tags.autotune.rb │   │   ├── 20150723203760_create_projects.autotune.rb │   │   ├── 20150723203761_create_users.autotune.rb │   │   ├── 20150723203762_create_authorizations.autotune.rb │   │   ├── 20150723203763_add_user_to_projects.autotune.rb │   │   ├── 20150723203764_change_user_meta_format.autotune.rb │   │   ├── 20150723203765_add_config_to_projects.autotune.rb │   │   ├── 20150723203766_add_pub_date_to_projects.autotune.rb │   │   ├── 20150723203767_expand_user_extra_field_length.autotune.rb │   │   └── 20150723203768_create_themes.autotune.rb │   ├── schema.rb │   └── seeds.rb ├── lib │   ├── assets │   └── tasks ├── log │   └── development.log ├── public │   ├── 404.html │   ├── 422.html │   ├── 500.html │   ├── favicon.ico │   └── robots.txt ├── test │   ├── controllers │   ├── fixtures │   ├── helpers │   ├── integration │   ├── mailers │   ├── models │   └── test_helper.rb ├── tmp │   └── cache │   └── assets └── vendor └── assets ├── javascripts └── stylesheets Possible Solutions I haven't used Rails 
since Rails 3 so I'm not sure what's causing the error. I checked to see if there was a template bug in 4.2.x but I couldn't find anything. It looks like the db migrations execute correctly so SOMETHING is coming through. I'll keep digging and let you know if I find anything. System I'm on Mac OS X 10.10.3 and using iTerm 2 as my terminal emulator $ which ruby rails # => /Users/AaronWilliams/.rbenv/shims/ruby # => /Users/AaronWilliams/.rbenv/shims/rails $ ruby -v && rails -v # => ruby 2.1.5p273 (2014-11-13 revision 48405) [x86_64-darwin14.0] # => Rails 4.2.3 Oh and here's the log output from the rails new command: $ rails new autotune_app -m https://raw.githubusercontent.com/voxmedia/autotune/master/rails_template.rb create create README.rdoc create Rakefile create config.ru create .gitignore create Gemfile create app create app/assets/javascripts/application.js create app/assets/stylesheets/application.css create app/controllers/application_controller.rb create app/helpers/application_helper.rb create app/views/layouts/application.html.erb create app/assets/images/.keep create app/mailers/.keep create app/models/.keep create app/controllers/concerns/.keep create app/models/concerns/.keep create bin create bin/bundle create bin/rails create bin/rake create bin/setup create config create config/routes.rb create config/application.rb create config/environment.rb create config/secrets.yml create config/environments create config/environments/development.rb create config/environments/production.rb create config/environments/test.rb create config/initializers create config/initializers/assets.rb create config/initializers/backtrace_silencers.rb create config/initializers/cookies_serializer.rb create config/initializers/filter_parameter_logging.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/session_store.rb create config/initializers/wrap_parameters.rb create config/locales create config/locales/en.yml create config/boot.rb create config/database.yml create db create db/seeds.rb create lib create lib/tasks create lib/tasks/.keep create lib/assets create lib/assets/.keep create log create log/.keep create public create public/404.html create public/422.html create public/500.html create public/favicon.ico create public/robots.txt create test/fixtures create test/fixtures/.keep create test/controllers create test/controllers/.keep create test/mailers create test/mailers/.keep create test/models create test/models/.keep create test/helpers create test/helpers/.keep create test/integration create test/integration/.keep create test/test_helper.rb create tmp/cache create tmp/cache/assets create vendor/assets/javascripts create vendor/assets/javascripts/.keep create vendor/assets/stylesheets create vendor/assets/stylesheets/.keep apply https://raw.githubusercontent.com/voxmedia/autotune/master/rails_template.rb gemfile resque (~> 1.25.2) gemfile omniauth-github (~> 1.1.2) gemfile foreman (~> 0.77.0) gemfile unicorn-rails (~> 2.2.0) gemfile https://github.com/ryanmark/s3deploy-ruby.git gemfile https://github.com/voxmedia/autotune.git create Procfile append Rakefile initializer resque.rb initializer omniauth.rb initializer autotune.rb create config/unicorn.rb route mount Autotune::Engine => '/' run rm config/initializers/wrap_parameters.rb from "." About to download stuff. It'll be a minute. 
run bundle install Updating https://github.com/ryanmark/s3deploy-ruby.git Updating https://github.com/voxmedia/autotune.git Fetching gem metadata from https://rubygems.org/............. Fetching version metadata from https://rubygems.org/... Fetching dependency metadata from https://rubygems.org/.. Resolving dependencies....... Using rake 10.4.2 Using i18n 0.7.0 Using json 1.8.3 Using minitest 5.7.0 Using thread_safe 0.3.5 Using tzinfo 1.2.2 Using activesupport 4.2.3 Using builder 3.2.2 Using erubis 2.7.0 Using mini_portile 0.6.2 Using nokogiri 1.6.6.2 Using rails-deprecated_sanitizer 1.0.3 Using rails-dom-testing 1.0.6 Using loofah 2.0.2 Using rails-html-sanitizer 1.0.2 Using actionview 4.2.3 Using rack 1.6.4 Using rack-test 0.6.3 Using actionpack 4.2.3 Using globalid 0.3.5 Using activejob 4.2.3 Using mime-types 2.6.1 Using mail 2.6.3 Using actionmailer 4.2.3 Using activemodel 4.2.3 Using arel 6.0.2 Using activerecord 4.2.3 Using execjs 2.5.2 Using autoprefixer-rails 5.2.1.1 Using sass 3.4.16 Using bootstrap-sass 3.3.5.1 Using multi_json 1.11.2 Using jbuilder 2.3.1 Using hashie 3.4.2 Using omniauth 1.2.2 Using bundler 1.10.6 Using thor 0.19.1 Using railties 4.2.3 Using sprockets 3.2.0 Using sprockets-rails 2.3.2 Using rails 4.2.3 Using mono_logger 1.1.0 Using redis 3.2.1 Using redis-namespace 1.5.2 Using rack-protection 1.5.3 Using tilt 1.4.1 Using sinatra 1.4.6 Using vegas 0.1.11 Using resque 1.25.2 Using aws-sdk-v1 1.64.0 Using aws-sdk 1.64.0 Using s3deploy 0.2.1 from https://github.com/ryanmark/s3deploy-ruby.git (at master) Using sass-rails 5.0.3 Using autotune 0.0.1 from https://github.com/voxmedia/autotune.git (at master) Using debug_inspector 0.0.2 Using binding_of_caller 0.7.2 Using columnize 0.9.0 Using byebug 5.0.0 Using coffee-script-source 1.9.1.1 Using coffee-script 2.4.1 Using coffee-rails 4.1.0 Using dotenv 1.0.2 Using multipart-post 2.0.0 Using faraday 0.9.1 Using foreman 0.77.0 Using jquery-rails 4.0.4 Using jwt 1.5.1 Using kgio 2.9.3 Using multi_xml 0.5.5 Using oauth2 1.0.0 Using omniauth-oauth2 1.3.1 Using omniauth-github 1.1.2 Using raindrops 0.15.0 Using rdoc 4.2.0 Using sdoc 0.4.1 Using spring 1.3.6 Using sqlite3 1.3.10 Using turbolinks 2.5.3 Using uglifier 2.7.1 Using unicorn 4.9.0 Using unicorn-rails 2.2.0 Using web-console 2.2.1 Bundle complete! 18 Gemfile dependencies, 82 gems now installed. Use `bundle show [gemname]` to see where a bundled gem is installed. run bundle exec spring binstub --all * bin/rake: spring inserted * bin/rails: spring inserted run bundle exec rake autotune:install:migrations from "." 
Copied migration 20150723203757_create_blueprints.autotune.rb from autotune Copied migration 20150723203758_create_tags.autotune.rb from autotune Copied migration 20150723203759_create_blueprint_tags.autotune.rb from autotune Copied migration 20150723203760_create_projects.autotune.rb from autotune Copied migration 20150723203761_create_users.autotune.rb from autotune Copied migration 20150723203762_create_authorizations.autotune.rb from autotune Copied migration 20150723203763_add_user_to_projects.autotune.rb from autotune Copied migration 20150723203764_change_user_meta_format.autotune.rb from autotune Copied migration 20150723203765_add_config_to_projects.autotune.rb from autotune Copied migration 20150723203766_add_pub_date_to_projects.autotune.rb from autotune Copied migration 20150723203767_expand_user_extra_field_length.autotune.rb from autotune Copied migration 20150723203768_create_themes.autotune.rb from autotune run bundle exec rake db:migrate from "." == 20150723203757 CreateBlueprints: migrating ================================= -- create_table(:autotune_blueprints) -> 0.0030s == 20150723203757 CreateBlueprints: migrated (0.0030s) ======================== == 20150723203758 CreateTags: migrating ======================================= -- create_table(:autotune_tags) -> 0.0009s == 20150723203758 CreateTags: migrated (0.0009s) ============================== == 20150723203759 CreateBlueprintTags: migrating ============================== -- create_table(:autotune_blueprint_tags) -> 0.0019s -- add_foreign_key(:autotune_blueprint_tags, :autotune_blueprints, {:column=>:blueprint_id}) -> 0.0000s -- add_foreign_key(:autotune_blueprint_tags, :autotune_tags, {:column=>:tag_id}) -> 0.0000s == 20150723203759 CreateBlueprintTags: migrated (0.0020s) ===================== == 20150723203760 CreateProjects: migrating =================================== -- create_table(:autotune_projects) -> 0.0034s -- add_foreign_key(:autotune_projects, :autotune_blueprints, {:column=>:blueprint_id}) -> 0.0000s == 20150723203760 CreateProjects: migrated (0.0035s) ========================== == 20150723203761 CreateUsers: migrating ====================================== -- create_table(:autotune_users) -> 0.0014s == 20150723203761 CreateUsers: migrated (0.0014s) ============================= == 20150723203762 CreateAuthorizations: migrating ============================= -- create_table(:autotune_authorizations) -> 0.0018s -- add_foreign_key(:autotune_authorizations, :autotune_users, {:column=>:user_id}) -> 0.0000s == 20150723203762 CreateAuthorizations: migrated (0.0019s) ==================== == 20150723203763 AddUserToProjects: migrating ================================ -- add_reference(:autotune_projects, :user, {:index=>true}) -> 0.0013s -- add_foreign_key(:autotune_projects, :autotune_users, {:column=>:user_id}) -> 0.0000s == 20150723203763 AddUserToProjects: migrated (0.0014s) ======================= == 20150723203764 ChangeUserMetaFormat: migrating ============================= == 20150723203764 ChangeUserMetaFormat: migrated (0.0076s) ==================== == 20150723203765 AddConfigToProjects: migrating ============================== -- add_column(:autotune_projects, :blueprint_config, :text) -> 0.0004s == 20150723203765 AddConfigToProjects: migrated (0.0004s) ===================== == 20150723203766 AddPubDateToProjects: migrating ============================= -- add_column(:autotune_projects, :published_at, :datetime) -> 0.0003s -- add_column(:autotune_projects, :data_updated_at, :datetime) -> 0.0002s == 
20150723203766 AddPubDateToProjects: migrated (0.0006s) ==================== == 20150723203767 ExpandUserExtraFieldLength: migrating ======================= -- change_column(:autotune_authorizations, :extra, :text, {:limit=>131072}) -> 0.0079s == 20150723203767 ExpandUserExtraFieldLength: migrated (0.0080s) ============== == 20150723203768 CreateThemes: migrating ===================================== -- create_table(:autotune_themes) -> 0.0009s -- create_table(:autotune_blueprints_themes, {:id=>false}) -> 0.0011s -- add_reference(:autotune_projects, :theme, {:index=>true}) -> 0.0013s -- add_foreign_key(:autotune_projects, :autotune_themes, {:column=>:theme_id}) -> 0.0000s create theme: generic => Generic create theme: mynewsorg => My news organization -- remove_column(:autotune_projects, :theme) -> 0.0173s == 20150723203768 CreateThemes: migrated (0.0543s) ============================ ======================================================= Your new Autotune application is now ready to rock! cd autotune_chronicle bundle exec foreman start ======================================================= @aboutaaron What you are seeing is the dev auth screen. You can register a new user there. It accepts any username, email combo. When you set up Autotune, the default auth provider is github for production environment. But in dev mode the dev auth kicks in. See code here You can modify omniauth initializer to change this behavior or change the auth provider entirely. It is under config/initializers/omniauth.rb Thanks for flagging this. We'll add this to the setup documentation. Ah got it. I'm just an idiot. This makes much more sense. I'm relieved this is just me being an idiot instead of a setup error. Thanks @kavyasukumar!
gharchive/issue
2015-07-23T21:05:56
2025-04-01T04:36:15.251097
{ "authors": [ "aboutaaron", "kavyasukumar" ], "repo": "voxmedia/autotune", "url": "https://github.com/voxmedia/autotune/issues/227", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1933189579
installs with no treesitter capabilities Getting zero functionality out of this treesitter grammar due to this issue. I am on an M1 MacBook Pro.

```
❯ nvim --version
NVIM v0.9.1
Build type: Release
LuaJIT 2.1.0-beta3
   system vimrc file: "$VIM/sysinit.vim"
  fall-back for $VIM: "/opt/homebrew/Cellar/neovim/0.9.1/share/nvim"
```

Result of :TSInstallInfo: templ [✓] installed. Result of :TSModuleInfo: templ ✗ ✗ ✗

```
❯ clang --version
Apple clang version 14.0.3 (clang-1403.0.22.14.1)
Target: arm64-apple-darwin22.5.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
```

Looks like the issue is that the queries are not being copied into the runtimepath. I don't know why, or how it's done in other treesitter grammars, but something isn't quite right with this one. And even copying the current queries results in only highlighting working, not injections. This is mostly fine for me for now, but it would be great to have the other languages injected. Hi. Did you install the repo as a plugin as described here? Apparently I didn't see that. Makes a lot of sense. Closing, thank you very much. RTFM twice is necessary sometimes, to be fair. I've added this yesterday, maybe it wasn't there when you looked :)
gharchive/issue
2023-10-09T14:15:27
2025-04-01T04:36:15.303256
{ "authors": [ "gamebox", "vrischmann" ], "repo": "vrischmann/tree-sitter-templ", "url": "https://github.com/vrischmann/tree-sitter-templ/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1303172109
Error while CHECK FILE STATUS - In 'e.cache(t)', 'e.cache' is undefined First of all, thanks for the amazing plugin! I'm currently facing a reproducible issue every time I open my iOS Obsidian app. I get the following error messages whenever the sync kicks in (sorry for the screenshot, I can't copy the log lines from the app). Enabling verbose logging doesn't seem to show extra log lines for this error. I also don't get them on the macOS app, only the iOS one. Any idea what's going on? Hi, updating to v0.11.10 seems to have solved the issue. Thanks a lot for the quick response and fix! I'll be sending a small contribution on GitHub Sponsors to support the development of this awesome plugin! Thanks a lot again! I am very relieved to hear that! And I appreciate your gratitude. I feel so honored!
gharchive/issue
2022-07-13T09:36:32
2025-04-01T04:36:15.309775
{ "authors": [ "MohamedBassem", "vrtmrz" ], "repo": "vrtmrz/obsidian-livesync", "url": "https://github.com/vrtmrz/obsidian-livesync/issues/90", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
197501869
More than 5 translations "freeze up" Hi, when I attempt to use the auto-fill or auto-translate functions in the backend and there are more than 5 in the queue, they seem to freeze up, like so: http://prntscr.com/dnoz8k I usually have to reboot FPM after this. Any clues how to fix this error? @mikenolimits, you can change the JS to only do one at a time. The code that determines that is https://github.com/vsch/laravel-translation-manager/blob/master/public/js/translations_page.js#L451-L453: just leave the fireTranslate() without the loop. Is there something that would have an issue handling concurrent requests? Database connection limit, etc.? The auto-translate gets the translation from Yandex, then makes a request to the LTM controller to set the translation. Each one winds up being a separate request. I will give that a shot and let you know if it works. For reference, my dev and production environment was PHP 7.0 on Laravel Forge and Laravel Homestead. @mikenolimits, I also use PHP 7.0 for dev but PHP 5.6 for prod on AWS. I don't use Laravel Forge or Homestead. When I started with Laravel 4.2 I could not get Homestead working, so I learned to roll my own and can't be bothered to switch now. I get enough app-breaking changes from Laravel version updates; I would not want the dev environment to have the same issues. Thank you @vsch, that solution appears to have worked. Not entirely sure why, though! @mikenolimits, not sure why it worked or why there was a problem in the first place either.
gharchive/issue
2016-12-25T13:50:23
2025-04-01T04:36:15.322340
{ "authors": [ "mikenolimits", "vsch" ], "repo": "vsch/laravel-translation-manager", "url": "https://github.com/vsch/laravel-translation-manager/issues/56", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
157943325
Poedit doesn’t respect (dark) theme colors under GTK+ Hallo, I just tried Poedit 1.8.7 with a dark theme and I ran into 2 problems. I. The sidebar font has similar colour as the background. II. After typing something like " Ctr+2", to choose a suggested translation, now, the text, in the box with the translated string (that is, the text that was just added), also changes colour to something similar to the background. I am attaching two picture demonstrating the issue. My system is... Mageia 5 32bit Poedit 1.8.7 IceWM Gtk3 theme: BlackMate The BlackMate theme is quite "standard", so I guess this is probably a Poedit issue. The BlackMate theme is quite "standard”, Well no, not really — never header of it, and all of your system is pretty crazily non-standard… That doesn’t mean it isn’t a bug, of course, but it decreases its priority. An itemized list of specific issues would be nice. You posted some screenshots without any comments now, but aside from sidebar, it’s unclear which of the ugliness there is normal for the theme and which things exactly are deviations. What’s up with two different text colors (with no explanation about the different in state) for example? (And of course, as always, patches would be even nicer!) Ah, never mind, got lost in the original comment’s formatting... "never header of it " ? https://github.com/mate-desktop/mate-themes/tree/master/desktop-themes/BlackMATE quote: "and all of your system is pretty crazily non-standard" With " Poedit 1.8.7"... I think you are right ;) . "never header of it “ ? Yes, this may shock you, but I really never heard about some of the more obscure themes used by more obscure DEs before. No, that doesn’t mean I do not believe in its existence and that you have to provide a link. Please limit comments to things that contribute to fixing. i have tested Poedit on many distros, KDE 5.x, standard dark themes, but on all it sems to have this issue with background there and the editable text problem. on many distros Doubtful about “many”. But sure, I don’t doubt reproducibility. Your PR would be welcome, but please don’t spam the issue tracker with pointless noise: first a duplicate issue, then this comment that adds nothing to the discussion. If you contribute, great, it would be most welcome. If you don’t, that’s fine too, but at least please don’t take developers time from actual work. Thank you. I don't spam anything, i just confirm this problem to. I have tested it on "many" distros: fedora, manjaro, linux mint, antergos, kaos linux, and so on..... changed the theme to dark, and the problem is there, on all of this distros. And this problem is presistent long time ago, i think about a year or maybe more, i have tested first time the Poedit on dark theme, and it was there. Today, this problem still there. I don't think this is specific to BlackMate but to any dark theme. Here the 3 I'm using: Arc Dark Vertex Dark Adwaita Dark I also tested the Ubuntu dark one, same issue. I think the issues is simply that white background color is hard coded on some widgets where it shouldn't (Every text entries, right sidebar...). This make the app barely useable with a dark theme because the cursor is white on white when editing entries. Suggestions (or any content that is in the right sidebar) is not readable to too. Consequence: its really exhausting when you have thousands of translations to handle, you are searching for the cursor to fix an entry, or trying to use a suggestion. 
(I would have contributed but I'm no C++ nor wxGTK developper) PS: @vslavik no offense but I agree with @nikoss , even if BlackMate in particular is not a standard theme, having a dark theme is now standard on Linux (even Ubuntu come with one by default), the 3 ones I provided as example are some of the most used theme under gnome) and so it's not really a edge case. PS2: I'm glad to help in any way if I can I don't think this is specific to BlackMate That's kind of obvious from all the references right above your comment; consequently, that super-long comment is unnecessary and adds precious little to the issue; please limit such noise. (I would have contributed but I'm no C++ nor wxGTK developper) Nobody is born one, you know... Version 2.2 officially supports dark themes: https://poedit.net/news/xliff-and-dark-mode/ Any bugs in it should be filed as (separate) bugs.
gharchive/issue
2016-06-01T15:54:28
2025-04-01T04:36:15.337095
{ "authors": [ "Getron", "nikoss", "noirbizarre", "vslavik" ], "repo": "vslavik/poedit", "url": "https://github.com/vslavik/poedit/issues/273", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
648471558
readCSVObjects() object can't be cast When calling the readCSVObjects() method, the resulting object cannot be cast to a custom interface, and therefore no type safety can be maintained. Example:

```ts
for await (const obj of readCSVObjects()) {
  const thing = obj as Thing; // this causes an error
}
```

The method produces an object of type { [key: string]: string } rather than a type of any. I understand the use of an any type would suggest that the CSV object might contain values that aren't strings, but as it stands no interface can be cast from the resulting object, which I think is more apt to error. Hi @atinybeardedman! I could transfer ownership of the result type to library users and change the function definition to readCSVObjects<T = { [key: string]: string }>(): AsyncIterable<T>. It could make the code simpler, but it gives no real type safety because of the any under the hood. If you want type safety, I would suggest using type guards that can cast a generic object to a specific type. For example:

```ts
function isThing(obj: { [key: string]: string } | Thing): obj is Thing {
  return typeof obj.a === 'string';
}
```

Do type guards fit your use case? Closed due to inactivity.
gharchive/issue
2020-06-30T20:29:48
2025-04-01T04:36:15.345231
{ "authors": [ "atinybeardedman", "vslinko" ], "repo": "vslinko/deno-csv", "url": "https://github.com/vslinko/deno-csv/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
261911307
Predictor for school not available. For LTIME52 I opened the link in my browser with the tag /school/ in the end. A webpage opens but there are no predictions, rankings. Just an empty webpage with the column names. Hi, currently rating predictions for School category are not supported. I would try to add this in the near future.
gharchive/issue
2017-10-01T12:02:15
2025-04-01T04:36:15.346237
{ "authors": [ "Ista2000", "vsp4" ], "repo": "vsp4/codechef-rating-predictor", "url": "https://github.com/vsp4/codechef-rating-predictor/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1211771298
Initial commit for whosaidit implementation Who said it? Players guess the correct OP for a random quote from the server co-authored by @mcab @ddcha3 How can we make the polling work with asyncio without having to use a spinlock?
gharchive/pull-request
2022-04-22T03:30:16
2025-04-01T04:36:15.347299
{ "authors": [ "mcab", "vsporeddy" ], "repo": "vsporeddy/chowder-bot", "url": "https://github.com/vsporeddy/chowder-bot/pull/65", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1118344323
For what is the TryInto<usize> implementation needed? I wanted to be able to specify format strings in a config file and then use them at runtime. Your crate enabled that I could define this format function which is like a one-arg version of format! but where the format string is specified at runtime. Thanks a lot! I have to confess that parts of the implementation are a bit above my current rust foo. Especially I don't quite understand why I had to implement TryInto<usize> for the type I use for positional arguments. What is the intent behind it? Formatting can include positional or named width specifiers (see the test-code below). For arguments to be used as width parameter of another format argument, passed values must be convertible to usize. Instead of implementing TryInto<usize>, you can also directly implement ConvertToSize. https://github.com/vstojkovic/rt-format/blob/02c02ef3efbf7020ce43908a3863d74232345409/tests/test_output.rs#L72-L89 @oberien is correct, this is to provide support for providing width and precision in your positional/named arguments. I just wanted to add that I don't really like that I had to make that mandatory, but I haven't found a way to make it optionally supported depending on whether you implement the trait or not. I'm not a Rust expert, so maybe I'm missing something, but to me it looks like I have to make it mandatory until the trait specialization is stabilized. @oberien @vstojkovic Thank you both, that makes sense, obviously. And now I'm also confident that I implemented it the right way, giving an Err each time, cause I only ever provide a &str as argument which won't work with $. Once you know where it is needed, it all makes sense. :-)
gharchive/issue
2022-01-29T22:40:17
2025-04-01T04:36:15.354073
{ "authors": [ "oberien", "tsdh", "vstojkovic" ], "repo": "vstojkovic/rt-format", "url": "https://github.com/vstojkovic/rt-format/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
573479544
Move binary files out of version control Binary file should not exist here: https://github.com/vtereshkov/xdpw/blob/1dff7ebc5d038951a8f971e7f1d378720b73ba35/xdpw.exe Here instead: https://github.com/vtereshkov/xdpw/releases Yes, I am considering it. But now it is convenient for me to clone just one folder and do whatever I want with it. Do you experience any problems with antivirus false positives when cloning a folder containing an .exe?
gharchive/issue
2020-03-01T04:46:47
2025-04-01T04:36:15.359833
{ "authors": [ "cup", "vtereshkov" ], "repo": "vtereshkov/xdpw", "url": "https://github.com/vtereshkov/xdpw/issues/13", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
2248455503
DateElement has no appendToBody Environment vuejs: 3.4.21, vueform: 1.9.5 Using tailwind Reproduction A minimal reproduction has been created. In it, the date picker's vertical space is limited by the form field. Since we allow scrolling in the overflow (there could be many more form elements), expanding the date picker shows the popup within this space, when, if it were attached to the body, it could overlay correctly on top. https://stackblitz.com/edit/github-sdivt5?file=src%2FApp.vue,index.html Describe the bug Any DateElement appearing at the bottom of the screen, or near the bottom of its container, will open within the container. This same behavior was seen with SelectElement before the "append-to-body" flag was added. As a popup control this should be handled in the same fashion. Additional context No response Logs No response I modified the example to show the differences between SelectElement and DateElement in this example
gharchive/issue
2024-04-17T14:34:09
2025-04-01T04:36:15.450609
{ "authors": [ "jadrake75" ], "repo": "vueform/vueform", "url": "https://github.com/vueform/vueform/issues/223", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
351937123
General Review @rgomez90, thanks for your willingness to help. The idea is to get to a 100% translated version of the 2.5 branch, which is the version this effort started with. So I have created a copy of that branch under the name 2.5-es. The steps to follow would be: Fork this repo. Check out master and make a copy of the contents (only the src folder) somewhere outside the repo. Check out 2.5-es, then apply and review, file by file, the previously copied src folder. Push everything to your repo on GitHub and create a Pull Request (rgomez90:es-ES => vuejs-es:es-ES). I recommend starting with this folder and creating one PR per file. Tell me if I'm missing anything or if you need more information. PS: don't forget to review the translation guidelines (Pautas de traducción). OK, perfect, I'm already on it. A couple of questions: Do we translate "bindings" as "enlaces de datos"? For links to external pages such as Mozilla Developer, do we always use the link from the original Vue docs, or, if the same page exists in Spanish, do we use that one? I don't know how to translate "bind"; it would be good to find an example in some docs that use it. Yes, if a Spanish link exists, I think it's fine to use it. Only me, apparently... Yes, the idea is to keep moving forward as time allows. I didn't know about GitLocalize; it looks interesting. I've asked them for an OSS account, we'll see. "Data bindings" and "bind" are translated as "enlaces de datos" and "enlazar" in the Microsoft WPF Docs, the Javi Suarez blog, and [another blog] (https://www.fullstack.pe/blog/angular-data-binding). I'm fine with not translating it. @rgomez90, @UchihaCFC, @juansaab: the master branch has been updated with @rgomez90's revisions, so we can now all continue from master and stick to it. Next we need to review master, taking the 2.5 branch of the English repo as the reference point, since that's the version we are translating. So everyone pick a section you can review. Here is the list to complete:

[x] api [ ] cookbook [ ] examples [x] guide [ ] style-guide

@rgomez90 and @UchihaCFC recently reviewed the api and guide sections, but if anything is missing within them, say so. For now I can review examples if you want, and then move on to style-guide, if @rgomez90 or @juansaab also agree. I think cookbook will be the heaviest and we can split it into more parts. Guys, the latest commit turned out to be wrong, so I've gone back to the previous commit. @rgomez90, when you can, let's review your contribution again. I've reset the tasks in Projects; review them so we can start assigning. @UchihaCFC, you're assigned Examples. @rgomez90, now works for me; will you create a chat like last time? @miljan-aleksic sorry, I missed the notification for your reply. Maybe the best thing would be for you to open a Gitter channel associated with this repo. (I would do it myself, but since I'm not a collaborator on the repo I can't.) That way we'd have a central place to discuss things live. If not, tell me and I'll create a temporary one like yesterday. Gitter is terrible; it doesn't even let me reopen the channel, among other things. When you can: delete your vuejs-es fork, create the fork again, apply your changes on master (making sure that no already-translated text gets reverted, the problem I detected earlier), open the PR, and we'll keep discussing there. Perfect, I'll get to it as soon as I can and send a PR for review. Do we do it by sections within examples, like we did with the API, or better all together? @UchihaCFC, all together, as long as the commit isn't excessively large. Thanks ^^ @rgomez90, did you see my comment? Hi, I'd like to participate; how could I help you? Thanks. Hi @mdxmtz, if it works for you, I'll assign you the translation of Cookbook; it's not very long. @miljan-aleksic sounds good. @mdxmtz, done :) @rgomez90, I know you're busy, but do you think you could have it ready this week? @mdxmtz, well, that sentence is somewhat confusing to translate, because in programming we understand "scope" as the context or environment we operate in, at least that's how I see it. We also don't have a specific generic translation for it like we do for other words, so in this case I would render that sentence as something like: "alcanzamos propiedades de instancia con $ para evitar esto" ("we reach instance properties with $ to avoid this"), but let's see what the rest say. If you tell us where it is in what you're translating, maybe we can find something better suited to its context.
gharchive/issue
2018-08-19T21:45:17
2025-04-01T04:36:15.465804
{ "authors": [ "UchihaCFC", "mdxmtz", "miljan-aleksic", "rgomez90" ], "repo": "vuejs-es/vuejs.org", "url": "https://github.com/vuejs-es/vuejs.org/issues/48", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
333817625
Problem when importing firebase-messaging-sw into the service worker. I'm getting the following error in the console (screenshot omitted; the registration error is quoted below). Here is my configuration:

```js
new SWPrecacheWebpackPlugin({
  cacheId: 'teste',
  filename: 'service-worker.js',
  importScripts: ['./firebase-messaging-sw.js'],
  staticFileGlobs: ['dist/**/*.*'],
  minify: true,
  stripPrefix: 'dist/'
})
```

Here is my firebase-messaging-sw.js:

```js
import * as firebase from 'firebase'
require('firebase/firestore')

firebase.initializeApp({config})
const messaging = firebase.messaging()
```

Error during service worker registration: TypeError: Failed to register a ServiceWorker: ServiceWorker script evaluation failed

Me too; how do I add this? Same problem here, any solution?
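A likely cause is the ES import syntax inside firebase-messaging-sw.js: scripts pulled in via importScripts run as classic service worker scripts, not modules, so evaluation fails. A minimal sketch of the importScripts-based pattern instead (the Firebase version and sender ID below are placeholders, not taken from this thread):

```js
// firebase-messaging-sw.js, written as a classic SW script (no ES modules)
importScripts('https://www.gstatic.com/firebasejs/5.5.0/firebase-app.js')
importScripts('https://www.gstatic.com/firebasejs/5.5.0/firebase-messaging.js')

firebase.initializeApp({
  messagingSenderId: 'YOUR_SENDER_ID' // placeholder, use your project's value
})

const messaging = firebase.messaging()
```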
gharchive/issue
2018-06-19T20:01:13
2025-04-01T04:36:15.472017
{ "authors": [ "donaldboulton", "eeerrrttty", "m16u31D" ], "repo": "vuejs-templates/pwa", "url": "https://github.com/vuejs-templates/pwa/issues/193", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1690602785
Add vue-global-alert-utility

General
✏️ Mark the necessary items without changing the structure of the PR template.
- [x] Pull request template structure not broken

Type
ℹ️ What types of changes does your code introduce?
👉 Put an x in the boxes that apply
- [ ] Fix
- [x] Feature

Checklist
ℹ️ Check all checkboxes - this will indicate that you have done everything in accordance with the rules in CONTRIBUTING.
👉 Put an x in the boxes that apply.
- [x] Title as described
- [x] Make sure you put things in the right category!
- [x] Always add your items to the end of a list

Open Source
- [x] Link description does not contain a link to an author / third-party resource
- [ ] The documentation (README) contains a description of the project, illustration of the project with a demo or screenshots and a CONTRIBUTING section
- [x] The documentation is in English.
- [x] The project is active and maintained.
- [x] The project accepts contributions.
- [x] Not a commercial product

Apps/Websites
- [ ] The website is available without errors or ssl certificate problems, and loads in a reasonable amount of time.
- [ ] The website is using vuejs intensively. It should detect vue with vue-devtools. If you cannot detect vue with vue-devtools due to work at non public pages (e.g. for enterprise website), you can send Pull Request with screenshot that detected it.
- [ ] The website is original and not too simple. For that reason, blogs and simple landing pages are rejected.
- [ ] A commercial product using Vue, provided that guests could reasonably check out how Vue was used (i.e. A headless CMS which uses Vue for the Admin/editor Area and offers a free tier).

Hi. Regarding "The documentation (README) contains a description of the project, illustration of the project with a demo or screenshots and a CONTRIBUTING section": I have a README that contains a description and a demo, but I don't have a CONTRIBUTING section. The project is so small that I have no specific contributing guidelines for it, and it is open to issues and pull requests. Do I still need to add a contributing section? If I do, all I can think of adding to it is: "open to pull requests and bug reports/questions through issues", but I think that's a little bit of stating the obvious. Let me know if you want me to add that anyway. Thanks
gharchive/pull-request
2023-05-01T09:27:08
2025-04-01T04:36:15.480777
{ "authors": [ "RashadSaleh" ], "repo": "vuejs/awesome-vue", "url": "https://github.com/vuejs/awesome-vue/pull/4053", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2318033392
refactor: change Components & Libraries order

General
✏️ Mark the necessary items without changing the structure of the PR template.
- [x] Pull request template structure not broken

Type
ℹ️ What types of changes does your code introduce?
👉 Put an x in the boxes that apply
- [ ] Fix
- [ ] Feature

Checklist
ℹ️ Check all checkboxes - this will indicate that you have done everything in accordance with the rules in CONTRIBUTING.
👉 Put an x in the boxes that apply.
- [ ] Title as described
- [ ] Make sure you put things in the right category!
- [ ] Always add your items to the end of a list

Open Source
- [ ] Link description does not contain a link to an author / third-party resource
- [ ] The documentation (README) contains a description of the project, illustration of the project with a demo or screenshots and a CONTRIBUTING section
- [ ] The documentation is in English.
- [ ] The project is active and maintained.
- [ ] The project accepts contributions.
- [ ] Not a commercial product

Apps/Websites
- [ ] The website is available without errors or ssl certificate problems, and loads in a reasonable amount of time.
- [ ] The website is using vuejs intensively. It should detect vue with vue-devtools. If you cannot detect vue with vue-devtools due to work at non public pages (e.g. for enterprise website), you can send Pull Request with screenshot that detected it.
- [ ] The website is original and not too simple. For that reason, blogs and simple landing pages are rejected.
- [ ] A commercial product using Vue, provided that guests could reasonably check out how Vue was used (i.e. A headless CMS which uses Vue for the Admin/editor Area and offers a free tier).

Clean commit record. About: https://github.com/vuejs/awesome-vue/pull/4177
gharchive/pull-request
2024-05-27T01:07:46
2025-04-01T04:36:15.488723
{ "authors": [ "warmthsea" ], "repo": "vuejs/awesome-vue", "url": "https://github.com/vuejs/awesome-vue/pull/4178", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
693199913
Should scoped styles be applied to teleported elements?

Version: 3.0.0-rc.10
Reproduction link: https://codesandbox.io/s/elegant-neumann-6j0rl
Steps to reproduce: Teleport an element with scoped styles.
What is expected? IMO we should be able to somehow apply scoped styles to teleported elements.
What is actually happening? Scoped styles are not applied.

Any updates on this?
gharchive/issue
2020-09-04T13:59:29
2025-04-01T04:36:15.491014
{ "authors": [ "AlexandreBonaventure", "fanckush" ], "repo": "vuejs/core", "url": "https://github.com/vuejs/core/issues/2047", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1094209851
Cannot disable or hide the timeline feature in Vuex

Version: 6.0.0-beta.21
Browser and OS info: Windows 10, Chrome
Steps to reproduce: I have disabled the timeline option in Vuex (plugin options), but it still shows the timeline.
What is expected? The timeline is hidden or can be removed (or resized, or even dragged to another position).
What is actually happening? The Vuex timeline is still there.

You can remove the layer in the interface.
gharchive/issue
2022-01-05T10:36:59
2025-04-01T04:36:15.492967
{ "authors": [ "Akryum", "ferrykranenburgcw" ], "repo": "vuejs/devtools", "url": "https://github.com/vuejs/devtools/issues/1654", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
442563372
The URL of sockjs-node is not the same as the browser URL (domain)

Version: 3.7.0
Reproduction link: https://github.com/dy86/vue-cli-issue-demo.git
Environment info:

```text
System:
    OS: macOS 10.14.4
    CPU: (12) x64 Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
Binaries:
    Node: 10.15.3 - /usr/local/bin/node
    Yarn: 1.15.2 - /usr/local/bin/yarn
    npm: 6.4.1 - /usr/local/bin/npm
Browsers:
    Chrome: Not Found
    Firefox: Not Found
    Safari: 12.1
npmGlobalPackages:
    @vue/cli: 3.5.5
```

Steps to reproduce: On a project created with vue-cli, run npm run serve directly. My machine's IP is 10.0.1.10. When visiting http://localhost:8080/, the `sockjs-node` URL is http://10.0.1.10:8080/sockjs-node/info?t=xxxxxx; when visiting http://10.0.1.10:8080/, the `sockjs-node` URL is http://localhost:8080/sockjs-node/info?t=xxxxxx. The two are always swapped, which produces this CORS error:

```text
Access to XMLHttpRequest at 'http://localhost:8080/sockjs-node/info?t=1557469168759' from origin 'http://10.0.1.10:8080' has been blocked by CORS policy: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
```

What is expected? The `sockjs-node` URL should correspond to the URL being visited, so the console does not show the CORS error.
What is actually happening? As described above: the `sockjs-node` host is always the opposite of the visited origin, triggering the CORS error. I'm not sure whether this is a vue-cli problem or a webpack-dev-server bug.

I ran into the same problem. Today I finally found the cause: my Chrome had a CORS extension installed; removing it made the error go away. Even without the error, though, when accessing the project via localhost vs. the machine IP, the sockjs address is still exactly the opposite one; that should also be a bug, right? For example, I visit http://localhost:8081/ but the sockjs address is ws://10.0.1.10:8081/sockjs-node/407/wexbc0pt/websocket. This problem is unrelated to Chrome; I get the error in Firefox too, and cli versions 4.4.6 and 4.5.0 are both affected. Found my problem: some local software had a global proxy enabled; turning it off fixed it.
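For what it's worth, a common mitigation at the time was to pin the host the dev-server client connects to. A minimal sketch (assuming vue-cli 3 with webpack-dev-server 3, whose `public` option tells the sockjs client which host to use; adjust the host and port to your setup):

```js
// vue.config.js (sketch, not from this thread)
module.exports = {
  devServer: {
    // make the sockjs client always connect to the origin you actually browse
    public: '10.0.1.10:8080'
  }
}
```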
gharchive/issue
2019-05-10T06:45:58
2025-04-01T04:36:15.513088
{ "authors": [ "Ttou", "dy86", "zhaojh329" ], "repo": "vuejs/vue-cli", "url": "https://github.com/vuejs/vue-cli/issues/3973", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
704724532
fix(types): make enable to use tuple type as EmitsOptions (#2159)

Fixed the problem from this comment in 758119b.
gharchive/pull-request
2020-09-19T00:41:55
2025-04-01T04:36:15.514250
{ "authors": [ "wonderful-panda" ], "repo": "vuejs/vue-next", "url": "https://github.com/vuejs/vue-next/pull/2160", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
749375787
Does not register eagerly emitted event

Describe the bug: wrapper.emitted() does not register an emitted event if it was emitted by a watch with the immediate: true option.

Reproduction: https://github.com/taigabrew/vue-test-utils-bug-48cb54ce

System Info: Operating System: macOS Catalina 10.15.7; Node version: v14.15.1; yarn version: 1.22.4; installed vue-jest version 5.0.0-alpha.6; installed @vue/test-utils 2.0.0-beta.10

I investigated this. I thought it might be related to setProps, but this works:

```js
import { defineComponent, watch } from 'vue'
import { mount } from '@vue/test-utils'

it('works with watch immediate: true', async () => {
  const Comp = defineComponent({
    props: {
      foo: {
        type: Number,
        default: 0,
      },
    },
    emits: ['update'],
    setup(props, { emit }) {
      watch(
        () => props.foo,
        (val) => {
          emit('update', val)
        },
        { immediate: true }
      )
    },
  })

  const wrapper = mount(Comp)
  await wrapper.setProps({ foo: 1 })
  console.log(wrapper.emitted()) // contains update with 1 as the payload
})
```

But actually the problem is that we are using a mixin that uses the beforeCreate lifecycle hook, which runs AFTER setup: https://github.com/vuejs/vue-test-utils-next/blob/0b4f762790dc20e386ac69c8cb13f5bea43871f3/src/mount.ts#L401

If you do something like:

```js
setup(_, { emit }) {
  onMounted(() => {
    emit('update')
  })
}
```

it will capture it. That could be a good workaround. I cannot think of a good way to capture events from setup right now 🤔 I am sure we have explored this before, and I don't think there is any way to override emit in setup before the beforeCreate hook, so as far as I know this is not something we can currently solve. Another workaround would be wrapping your component and using a mock function:

```js
import { defineComponent, watch, h } from 'vue'
import { mount } from '@vue/test-utils'

it.only('works with watch immediate: true', async () => {
  const Comp = defineComponent({
    props: {
      foo: {
        type: Number,
        default: 0,
      },
    },
    emits: ['update'],
    setup(props, { emit }) {
      watch(
        () => props.foo,
        (val) => {
          emit('update', val)
        },
        { immediate: true }
      )
    },
  })

  const onUpdate = jest.fn()
  const Wrap = {
    render() {
      return h(Comp, { onUpdate })
    },
  }

  const wrapper = mount(Wrap)
  // await wrapper.setProps({ foo: 1 })
  expect(onUpdate).toHaveBeenCalled()
})
```

I think this is probably the best option right now, unless we can find a way to override emit and capture events in setup (before the beforeCreate lifecycle is executed and we can start tracking emitted events). We might want to add this to the docs somewhere; I don't expect to solve this any time soon. At least there is a workaround. Ok, I understood. Thank you for the fast reply! I think information about the lifecycle-hook gotchas of setup would help. For now I am sticking with the workaround. Once again, thank you!
gharchive/issue
2020-11-24T06:10:52
2025-04-01T04:36:15.521683
{ "authors": [ "lmiller1990", "taigabrew" ], "repo": "vuejs/vue-test-utils-next", "url": "https://github.com/vuejs/vue-test-utils-next/issues/259", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1412907502
'WeakMap' is undefined in Vue 2.7.11 and above (IE10 and under)

Version: 2.7.11
Reproduction link: github.com
Steps to reproduce: Open a Vue 2 app using Internet Explorer 10 or below.
What is expected? The app loads and 'Hello Vue!' is displayed.
What is actually happening? The app does not load, and "'WeakMap' is undefined" is printed to the console.

WeakMap is not supported in Internet Explorer 10 or less and was introduced to Vue in version 2.7.11. As a workaround, use a WeakMap polyfill: https://github.com/polygonplanet/weakmap-polyfill

Thanks @posva - we provide a framework to customers with a fairly wide browser base. While we're happy to include this polyfill for our own testing purposes, this does seem like a change in browser support that some of our customers may also be feeling. Does the Vue team intend to continue supporting these legacy browsers for Vue 2, or is there an intended change in browser support that we should be prepared for?
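A minimal sketch of wiring that polyfill in (assuming the weakmap-polyfill npm package linked above, which registers a global WeakMap when imported; it must load before Vue):

```js
// entry file (e.g. main.js)
import 'weakmap-polyfill' // defines window.WeakMap on old IE
import Vue from 'vue'

new Vue({
  render: (h) => h('div', 'Hello Vue!'),
}).$mount('#app')
```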
gharchive/issue
2022-10-18T09:47:01
2025-04-01T04:36:15.525441
{ "authors": [ "gingerbenw", "posva" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/12837", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
229554695
Vue 2.5.1 component custom delimiter not working

Version: 2.5.1
Reproduction link: -
Steps to reproduce: webpack.base.config has the following alias:

```js
'vue$': 'vue/dist/vue.common.js',
```

Running Vue 2.5.1, inside a component I do

```js
export default {
  delimiters: ['{', '}'],
```

and inside the <template> tags I have

```html
<div>
  <a class="logo">{ Model.Name }</a>
</div>
```

What is expected? It should output the model's value.
What is actually happening? It outputs { Model.Name }.

Delimiters can only be changed when using runtime compilation with the full build (vue.js). They do not work in *.vue files (to keep all *.vue files' syntax consistent). Oh okay! Thanks! Haha, yeah, I realized the 2.5.1 was for the CLI :upside_down_face: Also thanks for all your great work, I love Vue! [[ Thanks ]]

This is interesting to me. If you cannot change them in the .vue files, you have inconsistency with applications whose delimiters are harder to change. In Craft 3.0 I cannot see how to change the lexers globally. Even if I could find this, it would likely break any Craft plugins that expect them to be the Twig default. I think the same argument could be made for the other applications; there may be reasons there not to change the lexers. I think allowing .vue components to change their own delimiters would make sense; however, I do agree with the argument that they should not be changed globally for a Vue application, for the same reason. IMHO the best way to do it is to allow the developer to choose when to change them, even in a component that is in a .vue file.
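For reference, a minimal sketch of where per-instance delimiters do take effect: in-browser compilation with the full build, per the restriction explained above (the data here is made up for illustration):

```js
// Requires the full build (e.g. vue/dist/vue.js), not the runtime-only build
new Vue({
  el: '#app',
  delimiters: ['{', '}'],
  data: { model: { name: 'Example' } },
  template: '<a class="logo">{ model.name }</a>' // renders "Example"
})
```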
gharchive/issue
2017-05-18T04:53:44
2025-04-01T04:36:15.531175
{ "authors": [ "gregorskii", "marclave", "roboriaan", "yyx990803" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/5697", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
278546716
X-template in Internet Explorer 11 or Microsoft Edge

Version: 2.5.0
Steps to reproduce: I'm using Vue with Laravel, and for an inline template I use an x-template script tag.
What is expected? I expect to see the landing page component in IE/Edge.
What is actually happening? I don't get any errors; I just get blank space where the component should be.

You seem to have posted the wrong fiddle; it doesn't contain any x-template. Oh, sorry about that: https://jsfiddle.net/kobimilan995/Lfeyockj/ Thanks for the real link, that clears it up. You have to define templates outside of the element you mount your app to: https://jsfiddle.net/Lfeyockj/1/ Thanks for the reply, but unfortunately that doesn't seem to fix the issue. Oh well, that x-template is only for a component; you did not define any template for the main instance, so of course it's empty. Either you define the template inside of the mount element (https://jsfiddle.net/Lfeyockj/2/) or you define a template for the main instance (https://jsfiddle.net/Lfeyockj/3/). And by the way, your example didn't even contain a proper script tag for Vue, so it couldn't run. Is it OK if I post the .blade.php file I have a problem with here? It has 600 lines, but 580 are HTML and they are irrelevant. If you remove the 580 irrelevant lines ;) It's not a runnable example, but I'll take a look. The first one is my Blade file. The second is how I register the component. And the third is the .vue file itself. Once again, it works as it should in Chrome.

Why would you put the template outside of a .vue file? The whole point is to precompile it, which requires the template to be in the .vue file. (But to be honest, that may still work somehow; I never tried.) Then you also use Vue.customElement(), which is a separate plugin we don't maintain. Then the main file still looks as if you have nested the template inside of the part of the page that will be controlled by the main instance. (I can't tell, because you don't show how the main instance is set up, but it seems that way because the template is directly beneath the custom element.) ...or you don't have a main instance because you use custom elements? Where is the compiled JavaScript inserted? At the end of the page? Anyway, I can't debug this; this is not a basic setup. My guess would be that you can't use an x-template because of 1 and/or 2. Afterthought: did you add a polyfill for the custom elements? IE doesn't support those. Hmm, didn't do that. I'm putting it outside the .vue files because I need a lot of Blade methods inside the template, and if I pass them all as props it could get messy. Can you maybe suggest an alternative for better communication between Blade files and Vue components? You have no idea how much I appreciate your help! The polyfill for custom elements fixed the issue! Thank you very very much!
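Since a custom elements polyfill was the fix here, a minimal sketch of including one (the polyfill package name is an assumption; the thread does not say which one was used):

```js
// entry file, load the polyfill before registering any custom elements
import '@webcomponents/custom-elements' // assumed polyfill package
import Vue from 'vue'
import vueCustomElement from 'vue-custom-element' // the separate plugin mentioned above

Vue.use(vueCustomElement)
```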
gharchive/issue
2017-12-01T17:58:22
2025-04-01T04:36:15.541639
{ "authors": [ "LinusBorg", "kobimilan995" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/7167", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
310218034
Could you please support using a symbol value for the ":key" field (when using v-for for list rendering)?

What problem does this feature solve? When I use v-for to render a list, every "instance" in the list has an "id". I would like to use a symbol type to express my "id" and fill my "id" into ":key". Everything is OK until my "id" has a duplicate (example: pushing the same "instance" into the list twice by mistake). In the function checkDuplicateKeys, when Vue detects a duplicate key, it logs a warning; the warning code is:

```js
warn(
  ("Duplicate keys detected: '" + key + "'. This may cause an update error."),
  vnode.context
);
```

When the "key" field is of symbol type, the browser throws an error because of joining a symbol value with a string, and then bye-bye my application. I checked the documentation; it didn't tell us that we can use a symbol as a key, so I think it is not a bug. So, could you please support the symbol type in list rendering?

What does the proposed API look like? It works fine: http://jsfiddle.net/jgbsjoxs/ Please, next time consider using the forum, the Discord server or StackOverflow for questions first. But feel free to come back and open an issue if it turns out to be a bug 🙂

It was also not clear to me that symbols as keys are supported, until I searched GitHub and found they're explicitly supported since v2.5.12. I think the documentation should be updated to say that key can be number | string | symbol. Also, the console warning "[Vue warn]: Avoid using non-primitive value as key, use string/number value instead." should be changed to "[Vue warn]: Avoid using non-primitive value as key, use string/number/symbol value instead." Then it will be more obvious that this is supported, and not something that just "happens to work by accident". @mpawelski could you open a pull request at vuejs/vuejs.org please, if there isn't one already?
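A minimal sketch of the usage under discussion (symbols as v-for keys, supported since v2.5.12 per the thread; the item data is made up):

```js
// Needs a build with the template compiler, since `template` is a string here
new Vue({
  el: '#app',
  data: {
    items: [
      { id: Symbol('a'), label: 'First' },
      { id: Symbol('b'), label: 'Second' },
    ],
  },
  template:
    '<ul><li v-for="item in items" :key="item.id">{{ item.label }}</li></ul>',
})
```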
gharchive/issue
2018-03-31T06:49:32
2025-04-01T04:36:15.548504
{ "authors": [ "gitby15", "mpawelski", "posva" ], "repo": "vuejs/vue", "url": "https://github.com/vuejs/vue/issues/7936", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
359173649
Vulkano.rs graphics pipeline guide does not compile

Following the guide posted here: when coming to the "drawing" part and setting up the command buffer that will be submitted, the program fails to compile with the following error:

```text
error[E0277]: the trait bound `vulkano::pipeline::vertex::SingleBufferDefinition<Vertex>: vulkano::pipeline::vertex::VertexSource<std::sync::Arc<vulkano::buffer::CpuAccessibleBuffer<[Vertex; 3]>>>` is not satisfied
   --> src/main.rs:394:6
    |
394 |     .draw(pipeline.clone(), &state, vertex_buffer.clone(), (), ()).unwrap()
    |      ^^^^ the trait `vulkano::pipeline::vertex::VertexSource<std::sync::Arc<vulkano::buffer::CpuAccessibleBuffer<[Vertex; 3]>>>` is not implemented for `vulkano::pipeline::vertex::SingleBufferDefinition<Vertex>`
    |
    = help: the following implementations were found:
              <vulkano::pipeline::vertex::SingleBufferDefinition<V> as vulkano::pipeline::vertex::VertexSource<B>>
              <vulkano::pipeline::vertex::SingleBufferDefinition<V> as vulkano::pipeline::vertex::VertexSource<std::vec::Vec<std::sync::Arc<vulkano::buffer::BufferAccess + std::marker::Send + std::marker::Sync + 'static>>>>
    = note: required because of the requirements on the impl of `vulkano::pipeline::vertex::VertexSource<std::sync::Arc<vulkano::buffer::CpuAccessibleBuffer<[Vertex; 3]>>>` for `vulkano::pipeline::GraphicsPipeline<vulkano::pipeline::vertex::SingleBufferDefinition<Vertex>, std::boxed::Box<vulkano::descriptor::PipelineLayoutAbstract + std::marker::Send + std::marker::Sync>, std::sync::Arc<vulkano::framebuffer::RenderPass<setup_graphics_pipeline::scope::CustomRenderPassDesc>>>`
    = note: required because of the requirements on the impl of `vulkano::pipeline::vertex::VertexSource<std::sync::Arc<vulkano::buffer::CpuAccessibleBuffer<[Vertex; 3]>>>` for `std::sync::Arc<vulkano::pipeline::GraphicsPipeline<vulkano::pipeline::vertex::SingleBufferDefinition<Vertex>, std::boxed::Box<vulkano::descriptor::PipelineLayoutAbstract + std::marker::Send + std::marker::Sync>, std::sync::Arc<vulkano::framebuffer::RenderPass<setup_graphics_pipeline::scope::CustomRenderPassDesc>>>>`
```

I'm quite new to Rust, so I'm still practicing deciphering these kinds of compile errors, although it seems like something might be wrong with the vertex implementation or the pipeline creation:

```rust
#[derive(Copy, Clone)]
pub struct Vertex {
    position: [f32; 3],
}
impl_vertex!(Vertex, position);

let pipeline = Arc::new(
    GraphicsPipeline::start()
        .vertex_input_single_buffer::<Vertex>()
        .vertex_shader(vs_shader.main_entry_point(), ())
        .triangle_list()
        .viewports_dynamic_scissors_irrelevant(1)
        .fragment_shader(fs_shader.main_entry_point(), ())
        .render_pass(Subpass::from(render_pass.clone(), 0).unwrap())
        .build(device.clone())
        .unwrap(),
);
```

Any ideas? I've narrowed it down to the way the CpuAccessibleBuffer for the vertex buffer is created:

```rust
let vertices = [
    Vertex { position: [-0.5, -0.5, 0.0] },
    Vertex { position: [0.0, 0.5, 0.0] },
    Vertex { position: [0.5, 0.25, 0.0] },
];

// doesn't compile
let vertex_buffer = CpuAccessibleBuffer::from_data(device.clone(), BufferUsage::vertex_buffer(), vertices).unwrap();

// compiles correctly
let vertex_buffer = CpuAccessibleBuffer::from_iter(device.clone(), BufferUsage::vertex_buffer(), vertices.iter().cloned()).unwrap();
```

Hah, you found the issue faster than I could explain it :) In a nutshell, the problem is that stack-allocated arrays are not very well supported by Rust's current abstraction vocabulary (traits and friends). If you replace vertices with a Vec (which you can do by putting a vec! before the opening square bracket in the definition of vertices), the code should compile. The main reason why arrays are a bit clumsy at the moment is that supporting them correctly requires an upcoming language feature which has not landed yet, namely const generics. The draw call wants a CpuAccessibleBuffer<[Vertex]>, whereas you are trying to feed it a CpuAccessibleBuffer<[Vertex; 3]>. Arrays and slices are not the same type in Rust, which is why this does not compile. I think the from_data constructor is meant to be used for "scalar" data like uniforms, not for arrays of vertices. Am I wrong in saying that the following code should compile, then?

```rust
let v = Vertex { position: [-0.5, -0.5, 0.0] };
let vertex_buffer = CpuAccessibleBuffer::from_data(device.clone(), BufferUsage::vertex_buffer(), v).unwrap();
```

I cannot get this to compile either, though... @jonathansty The code you posted does not work, because it will produce a CpuAccessibleBuffer<Vertex> whereas you want a CpuAccessibleBuffer<[Vertex]> (notice the brackets: a slice with one element is not the same as a single object in Rust). So if I understand correctly, from_data would be used to create buffers that can be bound as uniforms using descriptor sets and such? I think I understand the use case and the difference between from_data and from_iter. Sorry if these questions were a bit annoying; I just got confused. In C and C++ this would use the same function for both cases. I suggest adding an example that uses a from_data buffer to initialize a uniform buffer. Go ahead and close this issue. I wouldn't create an example solely to demonstrate from_data; hopefully we get an example in the future that uses it naturally as part of a larger example. This PR should add one: https://github.com/vulkano-rs/vulkano-examples/pull/24/files
gharchive/issue
2018-09-11T18:47:49
2025-04-01T04:36:15.662281
{ "authors": [ "HadrienG2", "jonathansty", "rukai" ], "repo": "vulkano-rs/vulkano", "url": "https://github.com/vulkano-rs/vulkano/issues/1036", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2543345487
[BUG] Cannot save tokens

Describe the bug: Tokens are not saved when selected from the list.
To Reproduce: ETH -> Choose tokens -> search for a token -> choose -> save; the selection isn't saved.

I tried this, but can't reproduce it. Can't reproduce it, closing.
gharchive/issue
2024-09-23T18:23:51
2025-04-01T04:36:15.664844
{ "authors": [ "Rockindash", "johnnyluo" ], "repo": "vultisig/vultisig-ios", "url": "https://github.com/vultisig/vultisig-ios/issues/1192", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1955501854
🛑 Ecoindex.fr Backend API is down

In 8e70efb, Ecoindex.fr Backend API (https://ecoindex.p.rapidapi.com/health) was down:
- HTTP code: 0
- Response time: 0 ms

Resolved: Ecoindex.fr Backend API is back up in 5eb29fc after 7 minutes.
gharchive/issue
2023-10-21T13:54:34
2025-04-01T04:36:15.669879
{ "authors": [ "vvatelot" ], "repo": "vvatelot/upptime", "url": "https://github.com/vvatelot/upptime/issues/136", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1532737024
Embed hosted stream's chat

Added an option to embed a hosted stream's chat, same as an embedded stream or live YT chat. The order of priority for which chat to show is embed, live, then host. I have no idea if there's a case where something could be hosted while the stream is live, but I believe the embed chat button will favor the live YT chat in this scenario.

Testing

I've tested with a YouTube host, and tried to mock a Twitch host. I got it working with a mocked Twitch host, but I was manually editing localStorage and stopping it from being updated with breakpoints. So I'm mostly confident, but could be wrong. We can't really force a Twitch embed to test... In Chrome DevTools, replace the dggApi:hosting localStorage value with something like this:

```json
{"id":"maya","platform":"twitch","displayName":"maya","url":"https://www.twitch.tv/maya"}
```

Editing this on its own won't actually load the hosted stream, but it will allow the Embed Chat button to work. Also, the value might be overridden by vanilla DGG code and have to be re-edited.

Thank you! :heart_eyes:
2023-01-13T18:26:29
2025-04-01T04:36:15.679249
{ "authors": [ "mattroseman", "vyneer" ], "repo": "vyneer/dgg-chat-gui-scripts", "url": "https://github.com/vyneer/dgg-chat-gui-scripts/pull/60", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
487002048
Added support for pipenv and non-standard Python configurations

This pull request makes the project compatible with pipenv. It can be run with pipenv run ./vyos-build-ami "<image_uri>". I'm not sure if the changes will break regular usage, but they shouldn't, as I believe it will default to using the Python on the default PATH instead of /bin/python (which was breaking for me on macOS). Actually, this forces Python version 3.7; removing the last section in the Pipfile should fix those problems. Thanks! Looks good at a glance, I'll try it. Have you made a working AMI, by the way? I've fixed the readme to mention the "make AWS" target required for the ISO build to include the EC2 initialization stuff. That cloud decoupling is a relatively recent change, so the readme was out of sync with reality and lying by omission! Also, there's a missing dep. I'll send another pull request in a few minutes, as soon as I'm sure it's working.

> Have you made a working AMI by the way? I've fixed the readme to mention the "make AWS" target required for the ISO build to include the EC2 initialization stuff.

Yes, I've modified the playbooks to use debian scratch (and necessarily overlay instead of aufs), but I still haven't confirmed it's working. I'm hoping to have it working before the end of the day.
gharchive/pull-request
2019-08-29T14:42:49
2025-04-01T04:36:15.682918
{ "authors": [ "dmbaturin", "tiagoad" ], "repo": "vyos/build-ami", "url": "https://github.com/vyos/build-ami/pull/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1473230129
Guidelines adjustments to better accommodate research grant applications

- Split the requested information for a grant application into two kinds of projects: software development and research.
- Add Google Scholar profiles for research grants.
- Add Research as a category of project in the guidelines.

Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
gharchive/pull-request
2022-12-02T17:52:27
2025-04-01T04:36:16.155636
{ "authors": [ "CLAassistant", "dsm-w3f" ], "repo": "w3f/Grants-Program", "url": "https://github.com/w3f/Grants-Program/pull/1329", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1227903154
relation graph

Project Abstract
Please provide a brief description of your project here summarising key points (1-2 paragraphs). If your application is a follow-up to a previous grant, please mention which one in the first line of the abstract and include a link to previous pull requests if applicable.

For which grant level are you applying?
- [ ] Level 1: Up to $10,000, 2 approvals
- [x] Level 2: Up to $50,000, 3 approvals
- [ ] Level 3: Unlimited, 5 approvals (for > $100k Web3 Foundation Council approval)

Application Checklist
- [x] The application template has been copied, renamed (project_name.md) and updated.
- [x] A BTC or Ethereum (DAI/USDT) address for the payment of the milestones is provided inside the application.
- [x] I have read and acknowledged the terms and conditions.
- [x] The software delivered for this grant will be released under an open-source license specified in the application.
- [x] The initial PR contains only one commit (squash and force-push if needed).
- [x] The grant will only be announced once the first milestone has been accepted.

How Did You Hear About our grants program?
- [x] Social Media
- [ ] Hackathon
- [ ] Personal Recommendation
- [ ] Substrate Builders Program
- [ ] Investor/VC
- [ ] Online Search
- [ ] Other: _______

Hey @relationlabs. Thanks for the PR. Is this an updated version of your previous PR (#920), or why did you submit a new one? Yes, it is an updated version of PR #920. I saw that only committee members can approve applications, so I closed that PR. Thanks for the info. It wouldn't have been necessary to create a new PR, since GitHub simply ignores approvals from users outside the W3F grants team. Picking up @Noc2's comment from the previous PR: are you aware of any similar projects, and if so, how does yours differ from the existing ones? I've browsed through the projects in applications and haven't found anything similar. Thanks for the updates, @relationlabs. One last request: could you add an article to at least milestone 2 or 3, targeted at potential users, that explains the project and how to use it? Something slightly less technical than a testing guide, ideally. Our application template suggests:

| 0e. | Article | We will publish an article/workshop that explains [...] (what was done/achieved as part of the grant). (Content, language and medium should reflect your target audience described above.)

Thank you for your advice. I have updated milestone 2. Will this interface with the RocksDB crate? Not at all; we will build the graph DB using a storage model on Substrate. Thanks for the updated application. Just to double check: with your first and second milestones, you won't only deliver the wasm package, but you will also deliver the actual pallet as open source code, correct? Could you update this in the application? And regarding the last application, could you provide more information about the demo (programming language, UI or CLI, etc.) and add this to the table? Thanks for your question. We added the open source code to milestone 3. The demo in milestone 3 is built with JS + Rust, and it will contain a simple UI. We have updated that content.

This grant is being terminated due to inactivity: https://github.com/w3f/Grant-Milestone-Delivery/pull/488#issuecomment-1204958685
gharchive/pull-request
2022-05-06T13:56:26
2025-04-01T04:36:16.168858
{ "authors": [ "alxs", "hakan-w3f", "relationlabs", "semuelle", "uukais" ], "repo": "w3f/Grants-Program", "url": "https://github.com/w3f/Grants-Program/pull/928", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
26872538
Let empty target cells with no data

The documentation states that the optional property empty of legendColors sets the color for dates with value == 0. This is confusing, because the documentation also states that the property name empty of subDomainTitleFormat targets dates without a value (so value == null). So does empty target value == 0 or value == null? I would encourage the latter option. In the case of the first option, there is a value which can be colored with the default legend, and it makes more sense to do so. Furthermore, there is currently no way to color dates where value == null, which happens to be what I'm currently trying to accomplish. I updated the code, but I get some failing unit tests. (I also get these before my modifications, so I assume I can ignore them.)

Using cal-heatmap again and ran into the same issue. Not all dates are included in my calendar, and for those dates I want to specify their color using the empty option of the legendColors parameter. However, this does not work. The fix in this pull request solves the issue. To make it more concrete, my calendar definition looks like this:

```js
yearcal = new CalHeatMap();
yearcal.init({
  domain: "month",
  subDomain: "day",
  itemName: ['kilometer', 'kilometers'],
  domainGutter: 5,
  displayLegend: false,
  cellSize: 14,
  legendColors: {
    min: "#DAE289",
    max: "#3B6427",
    empty: "#dddddd"
  },
  tooltip: true,
  itemSelector: "#year-chart",
  previousSelector: "#previous-month",
  nextSelector: "#next-month",
  start: new Date(timestampFirstDate * 1000),
  range: 12,
  data: yeardata
});
```

If I remove the entire legendColors attribute, the calendar is rendered properly: cells with no value are grey, and cells with data are colored using the default greenish legend colors. But I want to modify the grey (because I need a little more contrast with my background), so I target that color using the legendColors attribute. The problem is that cells with no value (neither the timestamp nor the value is present in the yeardata object) are now colored "#DAE289" (the color set by min). It looks like these cells are treated as value 0, although considerMissingDataAsZero is not set to true. Setting this value explicitly to false does not solve the issue. The problem is this line:

```js
if (d.v === 0 && options.legendColors !== null && options.legendColors.hasOwnProperty("empty")) {
```

This says that if the value is 0, legendColors is specified, and the empty attribute is given, the color of the cell is set to the value given by empty. It explains why my calendar renders correctly when I don't set the legendColors attribute. But the if statement is not correct: the empty flag should target cells where the value is null, not where it is 0. So this should be:

```js
if (d.v === null && options.legendColors !== null && options.legendColors.hasOwnProperty("empty")) {
```

And that is what this pull request does. @kamisama could you please accept this change or tell me what I'm doing wrong? Hi back. It took me some time to review the code, since I haven't touched it for months. The fix seems logical. Can you please edit the file in /src instead? The files in the root are built by a grunt task. Thanks a lot! Will do so soon. What happens when the value is null but empty is not set? See #117. When legendColors is set but it has no empty attribute, a cell with value null is colored according to the color set in legendColors.min. This also happens on the current master version of cal-heatmap (I mean, that issue existed before my fix).
I'll see if I can add a fix for that too. While testing, I also noticed that the base attribute is not working as advertised. For some reason, it gets overridden by the min color of legendColors. So if I define legendColors with a min, max and base color, then cells with value null should be colored according to base and cells with value 0 should be colored according to min, right? As far as I can see now, everything is colored according to min. What kind of cells should be colored by base? As far as I can see, base is not really doing anything; it gets overridden by min. Furthermore, with my proposal to let the empty attribute target cells where value === null, there is no way to target cells where value === 0, which could be what other users are looking for. I propose the following: the label empty will be replaced by the label zero and will target cells where the value is 0; the label base will target cells where there is no data (value === null). Could you agree with that? I agree that the legendColors option is a mess. It was added in order to re-color the entire calendar without any CSS changes, but there's a naming issue there. Point 1: that makes more sense. Point 2: base is the base color of the calendar (either because there is no data, or because the data has not been fetched yet). It's the default color for cells with value == null, and its goal is to be overridden by other colors. I think that is already the case now. Can you revert the changes in the 2 js files in the root folder, and only keep the edit in the js file located in the src folder? Files in the root are automatically built, and there's a conflict preventing the merge. @kamisama done. @bartaelterman just wanted to say a quick thank you for sorting this out. It was driving me mad that base and empty weren't doing anything, let alone what they were supposed to do. Also, the docs are still referencing version 3.3.10 in the Installation section. That threw me for a loop as well.
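To make the proposal concrete, a sketch of what a config would look like under the proposed labels (hypothetical: zero and base here follow the naming suggested above, not a shipped API):

```js
cal.init({
  legendColors: {
    min: "#DAE289",  // smallest non-zero values
    max: "#3B6427",  // largest values
    zero: "#eeeeee", // proposed: cells whose value === 0
    base: "#dddddd"  // proposed: cells with no data (value === null)
  }
})
```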
gharchive/pull-request
2014-02-04T10:40:01
2025-04-01T04:36:16.202416
{ "authors": [ "bartaelterman", "joshuapinter", "kamisama" ], "repo": "wa0x6e/cal-heatmap", "url": "https://github.com/wa0x6e/cal-heatmap/pull/74", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
419145583
Clone error

```text
git clone git@github.com:wadewegner/salesforce-cli-zsh-completion.git
Cloning into 'salesforce-cli-zsh-completion'...
Warning: Permanently added the RSA host key for IP address '192.30.253.112' to the list of known hosts.
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.
```

Update URL: https://github.com/wadewegner/salesforce-cli-zsh-completion
gharchive/issue
2019-03-10T04:32:17
2025-04-01T04:36:16.221986
{ "authors": [ "chandra2ravi" ], "repo": "wadewegner/salesforce-cli-zsh-completion", "url": "https://github.com/wadewegner/salesforce-cli-zsh-completion/issues/8", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
137771376
Intraday minute data is missing entries for the current day

I tried the 30-minute example, `ts.get_hist_data('600848', ktype='30')`, and got the following result:

```text
                     open  high  close   low   volume  price_change  p_change    ma5   ma10    ma20     v_ma5    v_ma10   v_ma20  turnover
date
2016-03-02 11:30:00  6.54  6.62   6.58  6.52  34292.7          0.04      0.61  6.370  6.304  6.0870   55352.8   81753.7  81881.2      0.12
2016-03-02 11:00:00  6.60  6.62   6.54  6.53  52267.8         -0.06     -0.91  6.300  6.247  6.0510   59231.2   91573.4  82820.8      0.18
2016-03-01 15:00:00  6.27  6.27   6.23  6.20  88895.5         -0.04     -0.64  6.250  6.190  6.0190   58220.0   94948.6  82318.2      0.30
2016-03-01 14:30:00  6.25  6.27   6.26  6.19  45095.3          0.01      0.16  6.254  6.155  6.0000   68818.1   91068.5  84298.6      0.15
2016-03-01 14:00:00  6.20  6.29   6.24  6.20  56212.9          0.04      0.65  6.248  6.126  5.9830   88931.5   91852.8  86555.7      0.19
2016-03-01 13:30:00  6.30  6.31   6.23  6.17  53684.3         -0.07     -1.11  6.238  6.104  5.9560  108154.0   95359.9  87933.6      0.18
2016-03-01 11:30:00  6.25  6.33   6.29  6.23  47212.2          0.04      0.64  6.194  6.079  5.9210  123916.0  103554.0  88923.1      0.16
```

The previous day's data is fine, but the current day's data has gaps: the 10:00 and 10:30 records are missing. I later tried 60-minute and 5-minute data as well, with similar problems. Can this be fixed? Thanks.

It should be there now. This was likely a problem with the Phoenix (ifeng) data source. @RobotJiang which WeChat account is yours? Could you send me a message on WeChat?
gharchive/issue
2016-03-02T04:41:09
2025-04-01T04:36:16.225342
{ "authors": [ "RobotJiang", "jimmysoa", "zhenyiy" ], "repo": "waditu/tushare", "url": "https://github.com/waditu/tushare/issues/99", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
802264980
Update tutorial.rst

Thanks for contributing to Wagtail! 🎉 Before submitting, please review the contributor guidelines https://docs.wagtail.io/en/latest/contributing/index.html and check the following:
- Do the tests still pass? (https://docs.wagtail.io/en/latest/contributing/developing.html#testing)
- Does the code comply with the style guide? (Run make lint from the Wagtail root)
- For Python changes: Have you added tests to cover the new/fixed behaviour?
- For front-end changes: Did you test on all of Wagtail’s supported browsers? Please list the exact versions you tested.
- For new features: Has the documentation been updated accordingly?

Updated in #6797
gharchive/pull-request
2021-02-05T15:39:28
2025-04-01T04:36:16.236117
{ "authors": [ "larathompson", "nmorduch" ], "repo": "wagtail/wagtail", "url": "https://github.com/wagtail/wagtail/pull/6795", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
107846294
Error using Oracle on Windows

With ST3 and GoOracle on Windows, when I try to issue any command with Oracle, I get the following error:

```text
Running oracle callees command...
'export' is not recognized as an internal or external command, operable program or batch file.
```

It looks like the "export" command on line 107 of goOracle.py is not a valid Windows command for updating the path. I believe you need to use the 'set' command on Windows (unless I'm missing some configuration for ST3 that is causing Oracle to not work). Even after fixing the above, I get the following error next:

```text
Traceback (most recent call last):
  File "goOracle in C:\Users\kyrra\AppData\Roaming\Sublime Text 3\Installed Packages\GoOracle.sublime-package", line 46, in on_done
  File "goOracle in C:\Users\kyrra\AppData\Roaming\Sublime Text 3\Installed Packages\GoOracle.sublime-package", line 108, in oracle
TypeError: 'NoneType' object is not subscriptable
```

I got this fully working for Windows (while breaking Linux/OSX). I'll see if I can get this working for both and submit a pull request. In case I don't, the main change is in 'def oracle', for setting up cmd:

```python
cmd = "set GOPATH=%(go_path)s&&set PATH=%(path)s&&oracle -pos=%(file_path)s:%(pos)s -format=%(output_format)s %(mode)s %(scope)s" % {
```

How you set environment variables, separate commands on the same line, and separate PATH entries all differ on Windows. The env variable setup in User.sublime-settings for Windows needs to differ a bit as well. For example:

```json
"env": {
  "GOPATH": "D:\\go",
  "PATH": "%PATH%;%GOPATH%\\bin"
},
```
gharchive/issue
2015-09-23T04:09:33
2025-04-01T04:36:16.240392
{ "authors": [ "jwendel" ], "repo": "waigani/GoOracle", "url": "https://github.com/waigani/GoOracle/issues/22", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2380375111
🛑 cyconet.org is down

In 397de8b, cyconet.org (https://www.cyconet.org) was down:
- HTTP code: 0
- Response time: 0 ms

Resolved: cyconet.org is back up in bd6894d after 58 minutes.
gharchive/issue
2024-06-28T12:54:28
2025-04-01T04:36:16.254122
{ "authors": [ "waja" ], "repo": "waja/cyconet-upptime", "url": "https://github.com/waja/cyconet-upptime/issues/2013", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2317052231
Device signature verification failing

Hello, I am trying to verify a device signature on a separate device from the one on which the mdoc presentation was created. I am getting some weird behaviour, so I would appreciate any information or clarification on the topic. This is the code I am trying to run:

```kotlin
val my_credential = "cbor hex in attached file"
val my_mdoc = MDoc.fromCBORHex(my_credential)
println(my_credential.length)

val dummy_request = MDocRequestBuilder("org.iso.18013.5.1.mDL")
    .addDataElementRequest("org.iso.18013.5.1", "portrait", false)
    .addDataElementRequest("org.iso.18013.5.1", "age_over_18", false)
    .build()

val session_elements: List<AnyDataElement> = listOf(StringElement("some_value"), StringElement("some_value_2"))
val sessionTranscript = ListElement(session_elements)

val device_auth = DeviceAuthentication(
    sessionTranscript,
    "org.iso.18013.5.1.mDL",
    dummy_request.decodedItemsRequest.nameSpaces.toEncodedCBORElement()
)
println("Device auth: " + device_auth.toDE().toCBORHex())

val device_jwk = JWKKey.generate(KeyType.secp256r1, JWKKeyMetadata())
val device_key = ECKey.parse(device_jwk.exportJWK())

val cryptoProvider_device = SimpleCOSECryptoProvider(
    listOf(
        //COSECryptoProviderKeyInfo("READER_KEY_ID", AlgorithmID.ECDSA_256, certChain.first().publicKey, x5Chain = certChain, trustedRootCAs = listOf(rootCaCertificate!!)),
        COSECryptoProviderKeyInfo("DEVICE_KEY_ID", AlgorithmID.ECDSA_256, device_key!!.toECPublicKey(), device_key!!.toECPrivateKey(), x5Chain = listOf()),
    )
)

val presentation = my_mdoc
    .presentWithDeviceSignature(
        dummy_request,
        device_auth,
        cryptoProvider_device,
        "DEVICE_KEY_ID")

println("Requested items: " + dummy_request.decodedItemsRequest.nameSpaces.toCBORHex())
println("Presentation requested items: " + presentation.deviceSigned!!.nameSpaces.toCBORHex())

// From this point we simulate verifier behaviour (separate device; can only infer requested items from the presentation)
val device_auth_verifier_side = DeviceAuthentication(
    sessionTranscript,
    "org.iso.18013.5.1.mDL",
    presentation.deviceSigned!!.nameSpaces.toEncodedCBORElement(),
)

val presentation_chain = presentation.issuerSigned.issuerAuth!!.x5Chain!!
val pres_chain = CertificateFactory.getInstance("X509").generateCertificates(
    ByteArrayInputStream(presentation_chain)
).map { it as X509Certificate }

val cryptoProvider_reader = SimpleCOSECryptoProvider(listOf(
    //COSECryptoProviderKeyInfo("ISSUER_KEY_ID", AlgorithmID.ECDSA_256, pres_chain.first().publicKey, x5Chain = pres_chain, trustedRootCAs = listOf(pres_chain.last())),
    COSECryptoProviderKeyInfo("ISSUER_KEY_ID", AlgorithmID.ECDSA_256, pres_chain.first().publicKey, x5Chain = pres_chain, trustedRootCAs = listOf(pres_chain.last())),
    COSECryptoProviderKeyInfo("DEVICE_KEY_ID", AlgorithmID.ECDSA_256, device_key.toECPublicKey())
))

val device_signature_verified = presentation.verifyDeviceSignature(device_auth_verifier_side, cryptoProvider_reader, "DEVICE_KEY_ID")
println("Device signature valid: " + device_signature_verified.toString())

val device_auth_with_dummy_request = DeviceAuthentication(
    sessionTranscript,
    "org.iso.18013.5.1.mDL",
    dummy_request.decodedItemsRequest.nameSpaces.toEncodedCBORElement()
)
val device_signature_verified_dummy = presentation.verifyDeviceSignature(device_auth_with_dummy_request, cryptoProvider_reader, "DEVICE_KEY_ID")
println("Device signature valid: " + device_signature_verified_dummy.toString())
```

Resulting output:

```text
May 25, 2024 3:51:42 PM MDocPresentationIssueTestingKt main
INFO: Device auth: 847444657669636541757468656e7469636174696f6e826a736f6d655f76616c75656c736f6d655f76616c75655f32756f72672e69736f2e31383031332e352e312e6d444cd818582ba1716f72672e69736f2e31383031332e352e31a268706f727472616974f46b6167655f6f7665725f3138f4
May 25, 2024 3:51:42 PM MDocPresentationIssueTestingKt main
INFO: Requested items: a1716f72672e69736f2e31383031332e352e31a268706f727472616974f46b6167655f6f7665725f3138f4
May 25, 2024 3:51:42 PM MDocPresentationIssueTestingKt main
INFO: Presentation requested items: d81841a0
May 25, 2024 3:51:42 PM MDocPresentationIssueTestingKt main
INFO: Device signature valid: false
May 25, 2024 3:51:42 PM MDocPresentationIssueTestingKt main
INFO: Device signature valid: true
```

Expected output: the device signature is valid for both verification cases.

This behaviour occurs because the function MDoc.presentWithDeviceSignature uses an empty map when creating the DeviceSigned element. Since the mDL device can generate a verifiable presentation of the mDL with any subset of data elements, the verifier can't know which subset to use to verify the presentation, or would have to try to verify the presentation with all possible subsets of data elements. In my local testing I replaced "mapOf()" with "mDocRequest.decodedItemsRequest.nameSpaces", which works fine for my use case. Is this a bug, or am I missing something? cbor_hex_credential.txt

Hi, I found the problem in your code: in device_auth_verifier_side you add the namespaces from the device-signed element. However, the device-signed part doesn't contain the document data that was requested with the MDocRequest. Instead, the device-signed object is like a proof of possession, where the device proves that it has ownership of the device key which the document was issued for. The requested document data is in the issuer-signed object, as the data is e.g. personal data that has been confirmed and signed by the document issuer. When the device makes the presentation for the given MDocRequest, it may selectively choose the data to include in the presentation, so it filters the issuer-signed object according to the requested data items (in your case "portrait" and "age_over_18"). Also, with regards to your statement "From this point we simulate verifier behaviour (separate device; can only infer requested items from the presentation)": I don't quite agree with this assessment, as the document request was probably originally created by the verifier service and passed on to the device as a QR code or URL or the like. So you can assume that the verifier knows which namespaces and data fields it requested from the device, such that it can assemble the device authentication structure correctly. Anyway, the short summary is: the first verification fails because the namespaces in the device authentication structure are taken from the device-signed object, which is a completely different type of object. You could take the fields from the issuer-signed object of the presentation instead, but I think the verifier should actually have the mdoc request, so that it can do it like in your second verification step, which succeeds in your case.

Hi, and thank you for your time. With regards to your statement "So you can assume that the verifier knows which namespaces and data fields it requested from the device, such that it can assemble the device authentication structure correctly": I agree partially. This would be the case if the device could only approve or decline a request, but if the device decides to provide only a subset of the requested data (selective disclosure), then the verifier would not know which data exactly it needs to assemble the device authentication structure. I didn't think about extracting the fields from the issuer-signed object, thank you for the hint. Additionally, I would like to discuss this statement: "However, the device-signed part doesn't contain the document data, which was requested with the MDocRequest. Instead, the device-signed object is like a proof of possession, where the device proves that it has ownership of the device key, which the document was issued for." If I am understanding you correctly, you are stating that the device-signed object should not contain any information regarding nameSpaces and data items included in the presentation. This goes against my understanding of the DeviceSigned structure, so if you could clarify some things that would be awesome:

- What is the point of having the nameSpaces field in the structure if it will always be empty?
- Is this aligned with the current mDL standard? I only have access to one of the draft versions, which defines the DeviceSigned structure as such: (the draft's DeviceSigned definition was shown as an image and is not reproduced here)

@severinstampler after reading through the specification again, and with the help of your comment, I was able to understand where I went wrong and what DeviceNameSpaces is used for. Thank you for the help and clarification. Resolved.
gharchive/issue
2024-05-25T14:16:24
2025-04-01T04:36:16.338248
{ "authors": [ "hrvoje459", "severinstampler" ], "repo": "walt-id/waltid-identity", "url": "https://github.com/walt-id/waltid-identity/issues/420", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2281418713
Sharing doesn't work? Error: Internal Server Error: Unable to locate credentials

Installed on an Ubuntu 22 server. Everything seems to be working fine. The only change I've made is changing 127.0.0.1 to the 10.10.10.200 IP in the main.py file. Everything loads and works fine, but when I try to share a result I see this error, and when I visit the link, I see an empty UI with the same error.

Sharing only happens on fly.io currently. It requires an S3-compatible storage configuration. I can add instructions on how to configure this for self-hosting.

Please make a guide for Fly.

Can you show how to deploy on Fly?

Sorry @rossman22590, I've been swamped with various things. There's a fly.toml file in the backend directory. You'll need to modify that to be the domain you end up using on Fly, and create a GitHub client_id and client_secret if you want to have GitHub login. Then you'll need to set any secrets for the API providers you want to use with flyctl, e.g. flyctl secrets set OPENAI_API_KEY=xxxx. To deploy, you can run npm run deploy from the frontend directory; that will build for the hosted version. I might have missed something, but that's the general idea.
gharchive/issue
2024-05-06T17:50:54
2025-04-01T04:36:16.351413
{ "authors": [ "ahakobyan79", "rossman22590", "vanpelt" ], "repo": "wandb/openui", "url": "https://github.com/wandb/openui/issues/86", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1808479280
chore(sweeps): resume sweeps from the cli

Fixes https://wandb.atlassian.net/browse/WB-14553

Description

Allows resuming sweeps from cancelled or finished states, which is already allowed in the UI. Further softens the restrictions on sweep state updates, which are currently very strict in the SDK. The only case that might be concerning is resuming a truly finished grid sweep, but this works as expected, with the agent launching 0 new runs:

- Successfully resumed a stopped random sweep and continued logging runs. Similar outcome for Bayesian sweeps.
- Resuming a finished grid sweep and launching a new agent resulted in 0 new runs, the expected behavior.
- Cancelling sweeps from the command line was successful for all sweep methods.
- Pausing and then resuming sweeps from the command line was successful for all sweep methods: paused, the sweep waits for a new run; resumed, the sweep picks up where it left off.
gharchive/pull-request
2023-07-17T19:41:36
2025-04-01T04:36:16.356600
{ "authors": [ "MBakirWB", "gtarpenning" ], "repo": "wandb/wandb", "url": "https://github.com/wandb/wandb/pull/5901", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
724402331
Dataset links?

After reading your repo, I can't find the dataset links. Is the dataset in the repo itself? Thanks a lot.

https://drive.google.com/a/usc.edu/uc?id=1f4k8zhomFuZt820bN8gl3zbw8lyvcxg0&export=download
gharchive/issue
2020-10-19T08:37:21
2025-04-01T04:36:16.362859
{ "authors": [ "wangby511", "xuhui1994" ], "repo": "wangby511/Extreme-Dark-Video-Enhancement", "url": "https://github.com/wangby511/Extreme-Dark-Video-Enhancement/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
307986249
Problem when copy-pasting from Word into the editor: content styled in Word, when pasted into the editor and retrieved via text.html, carries tags like <w:LsdException>. These tags actually have no effect on the styling. Could such non-HTML tags be stripped when the content is retrieved? Otherwise, once the content is saved to the database, later processing becomes very troublesome.

You can handle this yourself for now via editor.customConfig.pasteTextHandle (https://www.kancloud.cn/wangfupeng/wangeditor3/448202); the editor will later handle pasting from Word/Excel in a unified way.
gharchive/issue
2018-03-23T11:03:40
2025-04-01T04:36:16.368168
{ "authors": [ "wangfupeng1988", "zsj1029" ], "repo": "wangfupeng1988/wangEditor", "url": "https://github.com/wangfupeng1988/wangEditor/issues/1416", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2759602568
Paste behavior is inconsistent between the http and https protocols: under https, pasting respects the createNewNodeBehavior option, while under http it does not.

Fixed; takes effect in v0.13.0+.
gharchive/issue
2024-12-26T10:34:02
2025-04-01T04:36:16.378494
{ "authors": [ "wanglin2" ], "repo": "wanglin2/mind-map", "url": "https://github.com/wanglin2/mind-map/issues/1065", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1852452928
feat: compatibility for importing older-version xmind files. For example: https://github.com/ckjbug/XMind-Learning/blob/master/ABP框架的学习路线图-群友版.xmind

Isn't this one already supported?

My example was a bad one; here is a better case: https://github.com/ckjbug/XMind-Learning/blob/master/阿里云盾.xmind

That one works too; have you tried it in the demo?
gharchive/issue
2023-08-16T03:42:29
2025-04-01T04:36:16.380986
{ "authors": [ "Xbs233", "wanglin2" ], "repo": "wanglin2/mind-map", "url": "https://github.com/wanglin2/mind-map/issues/273", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
626401915
Using WMPageController on the Home page: after the app is sent to the background for a while and then reopened, CGContextSetFillColorWithColor crashes with EXC_BAD_ACCESS. The cause is that color in WMProgressView is a CGColorRef held with assign (a weak, non-retaining reference). After the app has been in the background for a while and is reopened, when drawRect redraws, color is only a dangling address, which triggers the CGContextSetFillColorWithColor error (EXC_BAD_ACCESS). Replacing WMProgressView's CGColorRef color with UIColor *color fixes it.
gharchive/issue
2020-05-28T10:33:52
2025-04-01T04:36:16.382307
{ "authors": [ "Jisen" ], "repo": "wangmchn/WMPageController", "url": "https://github.com/wangmchn/WMPageController/issues/631", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2145082886
Permissions Change

Followed what https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action#writing-the-action-code had. Let's see if the Docker container is happy about that.

Nice
gharchive/pull-request
2024-02-20T19:05:37
2025-04-01T04:36:16.399888
{ "authors": [ "bmburlingame", "ksolkowski" ], "repo": "wantable/github-actions-haml-lint", "url": "https://github.com/wantable/github-actions-haml-lint/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2280699547
Intents "Analyzers" platform A great addition to our intent engine would be the ability for users to write conditions based on the transaction they're asking to sign. Currently, we interpret the signature request payload with SignMethods. Users are able to specify, for example, the "Ethereum SignMethod" and the input payload will be parsed as an Ethereum unsigned transaction. Some metadata can be extracted from it (amount, to, etc.). We're not doing it yet, but this metadata can be put in the shield evaluator environment. Users could write intents referencing the extracted metadata. What I would like to have instead is a CosmWasm contract that does this job. Instead of specifying a SignMethod from a fixed enum, with each SignMethod handler embedded in wardend, the users could specify a list of smart contract addresses that parses the input payload and provides the same metadata. This is a paradigm shift, as we allow 3rd party developers to build their own "Metadata Providers" (do you have a better name?). For example, we can have a 3rd party developer add support for parsing Solana transactions or who knows what, as long as the parser can be written as a CosmWasm contract. The flow would look like this: user sends a MsgNewSignatureRequest, specifying one or more contracts (instead of SignMethod) the msg server handler from the Cosmos SDK module invokes the smart contracts and receive the execution results in a standardized format. It'll store the metadata in the Action, as part of the frozen intent definition. every time the Action is evaluated, the shield's execution environment will read the metadata present in the Action I'll rename "Metadata Parsers" => "Analyzers" it's shorter and imho more evocative receive the execution results in a standardized format Do you have any thoughts about this format, how it may looks and what it should describe? receive the execution results in a standardized format Do you have any thoughts about this format, how it may looks and what it should describe? yeah good question! The contract should return a collection of key-values. The key should be a string (it will be an identifier in the shield language), and the value could be any supported type in the shield AST (i.e. integers and booleans, for now). By default, CosmWasm returns the message in JSON format to the SDK. I thought of reusing this encoding, but there is no easy way of doing something like this in Rust: type AnalyzerResult struct { Values []KV } type KV struct { Key string Value any } So what I did in #279 instead is to have a custom JSON object like, for the sample contract: #[cw_serde] pub struct AnalyzeResult { pub length: u64, } that will serialize into a JSON like this: { "length": 2 } Now, from the SDK module I'm unmarshalling this JSON into a map[string]any, then I have a type-switch to do a type check of the any value and converting it into the proper AST node. In the example above I would have a key named length and a value float64(2). I check that the float64 is actually an integer and finally I'll convert it to an IntegerLiteral of shield's AST. 
What I'm doing now is to take this a step further: besides key-value pairs, I want analyzer contracts to return an optional data_for_signing binary, so the new AnalyzerResult looks like this:

```rust
// common result to all analyzers:
#[cw_serde]
pub struct AnalyzeResult<T> {
    pub data_for_signing: Option<Binary>,
    pub result: T,
}

// specific for my basic-analyzer, will be the <T> above:
#[cw_serde]
pub struct BasicAnalyzerResult {
    pub length: u64,
}
```

If an analyzer returns a data_for_signing, it will replace the DataForSigning field on the user request. The result field is parsed by the Go code described above. Do you have other ideas, or do you see any flaws in that?

Wow! Thanks for the examples, they really help. My suggestion for now, as before for v1 intents, is to limit the structure size, but I can't provide a way to do it on the contract side. Maybe it can just be done in the Go part. Another potential problem is that the user has to trust the contract. What if 3rd-party contracts parse the input and return an AnalyzeResult, but also make a transfer from the caller?

> My suggestion for now, as before for v1 intents, is to limit structure size, but I can't provide the way to do it on contract side.

Yeah, I think #267 already captures that; the intent per se might have a hard limit. I think the gas cost already disincentivizes abuse through calling too many smart contracts.

> Another potential problem is that user should trust the contract. What if 3rd party contracts will parse input and return AnalyzeResult but also make a transfer from the caller?

This is correct: the user references a contract in its Intent, so it needs to be trusted (note that this always applies when you use a smart contract on any chain). I don't think a CosmWasm contract can initiate a transfer, but I might be wrong; if there's a way to limit the capabilities of specific contracts, we should look into that.
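For illustration, here is a minimal sketch of the Go-side decoding described above. The type switch mirrors the explanation; the conversion targets are commented rather than calling shield's actual AST constructors, whose names are not shown in this thread:

```go
package analyzer

import (
	"encoding/json"
	"fmt"
	"math"
)

// DecodeAnalyzerOutput unmarshals an analyzer contract's JSON result into
// shield-compatible values, mirroring the type switch described above.
func DecodeAnalyzerOutput(raw []byte) (map[string]any, error) {
	var kv map[string]any
	if err := json.Unmarshal(raw, &kv); err != nil {
		return nil, err
	}
	out := make(map[string]any, len(kv))
	for key, val := range kv {
		switch v := val.(type) {
		case bool:
			out[key] = v // would become a shield BooleanLiteral
		case float64:
			// JSON numbers decode to float64; accept only integral values.
			if v != math.Trunc(v) {
				return nil, fmt.Errorf("key %q: %v is not an integer", key, v)
			}
			out[key] = int64(v) // would become a shield IntegerLiteral
		default:
			return nil, fmt.Errorf("key %q: unsupported type %T", key, val)
		}
	}
	return out, nil
}
```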
gharchive/issue
2024-05-06T11:51:56
2025-04-01T04:36:16.410786
{ "authors": [ "Pitasi", "mn13" ], "repo": "warden-protocol/wardenprotocol", "url": "https://github.com/warden-protocol/wardenprotocol/issues/262", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2453420428
changed jvmToolchain function for better compatibility

This only changes the function used, for better compatibility with projects using an older Kotlin Android plugin (<1.7.20).

@T-eli Thanks!
gharchive/pull-request
2024-08-07T12:56:07
2025-04-01T04:36:16.436972
{ "authors": [ "T-eli", "wasabeef" ], "repo": "wasabeef/flutter_ua_client_hints", "url": "https://github.com/wasabeef/flutter_ua_client_hints/pull/119", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2456477072
feat(examples): add http image resizer API component

This commit adds an image resizer component, written in Rust and operating over HTTP, to the examples, in order to showcase a slightly more complicated component. This component copies some API cues from projects like Thumbor and Imagor, and in the general flow can do the following:

- Accept an uploaded image or download one
- Upload the original image to linked blob storage
- Transform the image via operations specified in the HTTP request
- Upload the transformed image to linked storage
- Return the transformed image to the user

Closes #2728

@vados-cosmonic is this one ready for review yet?

Ah, sorry, this got lost in the sands of time! I will polish it up and get it ready for review!

Closing this for now; the ground has shifted under this PR and other things have taken priority. I will open it again when I've found time to get to it!
gharchive/pull-request
2024-08-08T19:25:38
2025-04-01T04:36:16.444232
{ "authors": [ "brooksmtownsend", "vados-cosmonic" ], "repo": "wasmCloud/wasmCloud", "url": "https://github.com/wasmCloud/wasmCloud/pull/2716", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
506092382
On deploy, also update the crate version

Currently, the Rust crate is not being updated with each Lerna deploy. It would also be nice to have some post-deploy script to sync the Cargo.toml with the package.json 😄

I don't believe this is relevant now that we've rewritten the package as @wasmer/sdk. The Release Please job will automatically bump the version number in package.json when a release is made. We're deliberately leaving the Rust crate at version 0.0.0 and setting publish = false because it isn't the entire package - we do some bundling and have an additional lib.ts file which sets things up and re-exports the WebAssembly code.
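For projects that do still want the sync the issue asked for, a minimal sketch of such a post-deploy script, assuming a Node.js environment with package.json and Cargo.toml in the repo root (paths and file layout are assumptions):

```js
// sync-version.js: copy package.json's version into Cargo.toml after a deploy.
const fs = require("fs");

const { version } = JSON.parse(fs.readFileSync("package.json", "utf8"));
const cargo = fs.readFileSync("Cargo.toml", "utf8");

// Replace the first `version = "..."` line, i.e. the [package] version.
const synced = cargo.replace(/^version\s*=\s*".*"$/m, `version = "${version}"`);

fs.writeFileSync("Cargo.toml", synced);
console.log(`Cargo.toml version set to ${version}`);
```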
gharchive/issue
2019-10-11T23:12:12
2025-04-01T04:36:16.446185
{ "authors": [ "Michael-F-Bryan", "torch2424" ], "repo": "wasmerio/wasmer-js", "url": "https://github.com/wasmerio/wasmer-js/issues/117", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
315484808
/annotations/instances_valminusminival2018.json file missing

Trying to train Mask R-CNN, I get the error that the above file is missing. Did you try to use your created dataset with a model like Mask R-CNN? Maybe there are more files missing. Eager to hear from you, and thanks.

I am not sure whether my problem is the same one as in this topic. When I tried to run visualize_coco.ipynb, it said there's no file named instances_shape_train2018.json. I then found it via your article, from which I downloaded shapes_train_dataset to replace the one given by the Jupyter notebook example. Maybe it's a good idea to update the examples/shapes/train content with the corresponding json file, or to indicate the download link in README.md?

It is not in GitHub; I think you can download it from https://patrickwasp.com/wp-content/uploads/2018/04/shapes_train_dataset.zip

You need to comment out the line in coco.py where it tries to load that (it's trying to append an additional set of data for training). Also change the import minival line to just import val.
gharchive/issue
2018-04-18T13:41:28
2025-04-01T04:36:16.449655
{ "authors": [ "BennoStaub", "austinmw", "gaqiness", "hiankun" ], "repo": "waspinator/pycococreator", "url": "https://github.com/waspinator/pycococreator/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2393080810
Use kqueue backend on BSDs

Apologies for the necro in the older issue, I genuinely thought my symptoms were identical. I hope opening a new issue is appropriate; if not, please gently guide me in the right direction.

I am running FreeBSD 14.0, and I find that the watch feature doesn't pick up immediately on changes, sometimes taking 30+ seconds or (rarely) hanging indefinitely. The root problem appears to be that cargo-watch pulls in watchexec 1.17.2, which in turn pulls in notify 4.0.18, which relied on a polling backend on BSDs. notify does have support for a kqueue backend as of 5.0.0-pre.11, and the most straightforward way I can see to get this backend into cargo-watch would be to upgrade the watchexec dependency to (at minimum) 2.0.0-pre.0.

I've forked and spent a few hours attempting to upgrade watchexec to 2.0.0-pre.0, and understandably it is a difficult refactor, mostly due to the API having changed significantly between watchexec 1.17.2 and watchexec 2.0.0-pre.0, especially with regard to handlers and configuration. I think I first need to familiarize myself with both versions of watchexec and the organization of cargo-watch before I could even think of pulling off a refactor, but if you have any suggestions they would be greatly appreciated.

I maintain and develop both. Watchexec post-1.17 has issues with filtering that I believe would lead to a slew of complaints from cargo-watch users, even if they have workarounds; after many iterations over the past few years, I aim to definitively fix these sometime in the next six months, then port cargo-watch over. I would consider a "beta" cargo-watch port to the current version of the watchexec lib if so contributed, but won't do that myself before the above timeline, to avoid burnout.

If you do attempt it: do not upgrade to 2.0.0-pre.N. Go for the current release. The pre-releases were all unstable and changed wildly before 2.0.0; there's zero advantage to using those.

Alternatively, you could try to retrofit a newer notify into watchexec 1.17; I'd accept a PR for that over there and make a patch release (possibly gated behind a feature to obey semver) so it can then be pulled into the current cargo-watch.
gharchive/issue
2024-07-05T19:59:00
2025-04-01T04:36:16.454684
{ "authors": [ "passcod", "ryanavella" ], "repo": "watchexec/cargo-watch", "url": "https://github.com/watchexec/cargo-watch/issues/311", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
2602922794
About the data. Hi, guys, Thanks for your interesting work. Can you provide some examples about the rgbd data, such as birmingham_block_0.png birmingham_block_0.tiff(after processing by cloudcompare). I want to ensure the rightness of the preprocessed data. Thank you very much. Is there any additional operation when exporting depth data (.tif)? such as, set the empty cells using leave empty? or other strategies? Can u give the url of the web-based 3D flight simulator? https://www.dropbox.com/scl/fi/hbj4mra1wpedjd5vzjq8b/rgbd.zip?rlkey=5miuvjyn83zf5e5lu4y0w834r&dl=0 Have you solved this problem? I am encountering the same issue. Thank you so much! https://www.dropbox.com/scl/fi/hbj4mra1wpedjd5vzjq8b/rgbd.zip?rlkey=5miuvjyn83zf5e5lu4y0w834r&dl=0 This link is not accessible.
gharchive/issue
2024-10-21T15:31:55
2025-04-01T04:36:16.458218
{ "authors": [ "EdenGabriel", "enjoysport2022", "miyatai2" ], "repo": "water-cookie/citynav", "url": "https://github.com/water-cookie/citynav/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
892284980
[OpenGOAL] make multiplication/divsion like GOAL and support in decompiler This makes power-of-two multiplies and divides optimized to shifts work correctly in the compiler/decompiler. There is a possibility of false detection. The behavior will still be correct, but there might be a multiply/divide when the original code had a shift. The decompiler uses these rules to pick between shifting and multiply/divide. bitfield access always wins over a multiply or divide. Accessing bitfields will not be affected by this change. Large shifts (over 10, or multiply/divide by 1024) are not converted to multiplication or division. Unsigned divides (logical shift) are never converted to division. Too many false positives and GOAL programmers didn't use uint types very often. This also adds the same optimization to OpenGOAL and implements unsigned division. Previously it just did signed division every time. Pull Request Test Coverage Report for Build 843605553 121 of 149 (81.21%) changed or added relevant lines in 10 files are covered. 3 unchanged lines in 2 files lost coverage. Overall coverage increased (+0.01%) to 69.213% Changes Missing Coverage Covered Lines Changed/Added Lines % common/util/BitUtils.h 8 9 88.89% decompiler/IR2/FormExpressionAnalysis.cpp 36 42 85.71% decompiler/IR2/bitfields.cpp 17 38 44.74% Files with Coverage Reduction New Missed Lines % decompiler/IR2/FormExpressionAnalysis.cpp 1 80.72% decompiler/IR2/Form.cpp 2 73.99% Totals Change from base Build 843069498: 0.01% Covered Lines: 34469 Relevant Lines: 49801 💛 - Coveralls
gharchive/pull-request
2021-05-14T22:32:21
2025-04-01T04:36:16.482677
{ "authors": [ "coveralls", "water111" ], "repo": "water111/jak-project", "url": "https://github.com/water111/jak-project/pull/483", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
1847983696
chore(master): release 1.2.27 :robot: I have created a release beep boop 1.2.27 (2023-08-12) Miscellaneous deps: update dependency pestphp/pest to v2.13.0 (d5b5076) This PR was generated with Release Please. See documentation. :robot: Release is at https://github.com/wayofdev/laravel-symfony-serializer/releases/tag/v1.2.27 :sunflower:
gharchive/pull-request
2023-08-12T12:30:38
2025-04-01T04:36:16.562980
{ "authors": [ "lotyp" ], "repo": "wayofdev/laravel-symfony-serializer", "url": "https://github.com/wayofdev/laravel-symfony-serializer/pull/121", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2415810076
Generate final tag and publish draft release for Wazuh 4.8.1 Description This issue attempts to publish draft release and generate final tag for version 4.8.1 in the wazuh-security-dashboards-plugin repository. Tasks [ ] Generate final tag v4.8.1 from branch 4.8.1 on wazuh-security-dashboards-plugin repository. [ ] Publish GitHub draft release from tag v4.8.1. I'm closing this issue as the repository will not be used until version 4.9.0
gharchive/issue
2024-07-18T09:10:47
2025-04-01T04:36:16.698996
{ "authors": [ "Tostti", "davidjiglesias" ], "repo": "wazuh/wazuh-security-dashboards-plugin", "url": "https://github.com/wazuh/wazuh-security-dashboards-plugin/issues/69", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
909564502
When onElementClick is defined, user can't navigate in the flow on top of nodes When defining a onElementClick method on the ReactFlow component, the end user can not navigate on top of existing nodes. It can be easily reproduced on the following example https://reactflow.dev/examples/interaction/. Navigate on top of nodes works fine Click on the : capture onElementClick checkbox You can not navigate anymore on top of nodes I guess it is the default behaviour, but it is possible override that ? I would like to have clickable elements in read only mode. One workaround would be to have a specific keyboard to hold to navigate in the flow, but is that doable ? Hey @clement-faure you are right. This is the default behaviour. You could implement your own solution with a keyboard event that toggles elementsSelectable and onElementClick. Hey @clement-faure you are right. This is the default behaviour. You could implement your own solution with a keyboard event that toggles elementsSelectable and onElementClick. Well noted, thank you.
gharchive/issue
2021-06-02T14:28:18
2025-04-01T04:36:16.792846
{ "authors": [ "clement-faure", "moklick" ], "repo": "wbkd/react-flow", "url": "https://github.com/wbkd/react-flow/issues/1229", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
745275749
How to find the ReactFow's grid position ? Background: I am building Toolbox which will have list of Nodes. User can drag and drop Nodes into ReactFow component. Using native onDrop function, I am able to add node into ReactFow - by updating the setElements Now i would like to add node with position property (it should be exactly the dropped location) onDrop event object gives me clientX, clientY but i am not sure how will i convert that position into ReactFow's grid position Question: How do i match the drop location with ReactFow's grid position? Could you please help me I am dealing with the same thing. An example of adding a node can be seen here You can use project to convert positions. Read here Now I'm looking for how to catch ref for root div.
gharchive/issue
2020-11-18T02:35:48
2025-04-01T04:36:16.797345
{ "authors": [ "YuriyKrasilnikov", "jagadeeshpalaniappan" ], "repo": "wbkd/react-flow", "url": "https://github.com/wbkd/react-flow/issues/695", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1146155556
fix: export type OnNodesDelete and OnEdgesDelete Refs: https://github.com/wbkd/react-flow/pull/1555#issuecomment-1043150079, e011375c3d25765c56bb790de4a216ca0bcfabfa 👍 thanks @Himself65
gharchive/pull-request
2022-02-21T19:25:17
2025-04-01T04:36:16.798799
{ "authors": [ "Himself65", "moklick" ], "repo": "wbkd/react-flow", "url": "https://github.com/wbkd/react-flow/pull/1918", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1174491305
A temporary solution of incompatible of Zotodo Open Edit→Preference Open Advanced→Advanced Configuration→Config Editor NOTE: THIS IS A DANGEROUS OPERATION Add a boolean key named extensions.checkCompatibility.6.0 and the value is false Save and restart Zotero This is a temporary solution, the key should be removed after the plugin fixed. By the way, update the plugin to compatible with Zotero 6.x is the best way. Hope the author of Zotodo have time to update this useful plugin! Anyway, is there anyone knows that why this plugin is incompatible with the 6.x version? Thx Third this! I am a big fan of this add-on and would appreciate it working again for Zotero 6 :) 👍 This should be fixed as of the release of v0.8.2. Please let me know if you still have issues!
gharchive/issue
2022-03-20T09:55:25
2025-04-01T04:36:16.805336
{ "authors": [ "jeongeunpark18", "wbthomason", "xiaodl813" ], "repo": "wbthomason/zotodo", "url": "https://github.com/wbthomason/zotodo/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
491212409
[Question] PATH Is it possible to somehow use this role to EDIT (rather than add) to the environment? Specifically, I would like to add to $PATH, but what actually happens is the existing one is overwritten. I also tried PATH: "$PATH:/foo/bar/" Hi, you can edit/replace the existing PATH definition as such: --- - hosts: all roles: - weareinteractive.environment vars: environment_config: PATH: "/foo/bar:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games/" But you can't add another PATH definition as it's using the lineinfile module. Hope this helps
gharchive/issue
2019-09-09T16:56:03
2025-04-01T04:36:16.847148
{ "authors": [ "franklinkim", "lonix1" ], "repo": "weareinteractive/ansible-environment", "url": "https://github.com/weareinteractive/ansible-environment/issues/16", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
172980357
make 24 time selection optional have an option not to show the 24 hour selection, so just the 12 hour selection is shown. sorry, didn't see the twelvehour option. thanks.
gharchive/issue
2016-08-24T15:16:43
2025-04-01T04:36:16.848007
{ "authors": [ "clemsontiger" ], "repo": "weareoutman/clockpicker", "url": "https://github.com/weareoutman/clockpicker/issues/92", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
294702252
[WIP] Update to Prometheus 2.0 YAML-based rule format Update to Prometheus 2.0 YAML-based rule format There's some trickiness here regarding what format to return from parsing, regarding the ability to track alert states, and not being able to create final rule groups yet. That's laid out in the comment above RulesConfig.Parse(). Fixes https://github.com/weaveworks/cortex/issues/622 This is based on the vendoring updates in https://github.com/weaveworks/cortex/pull/688. @jml This is basically ready and working, but marked [WIP] only because: I need to make slight readability improvements to error reporting to the user. I need to check whether we want to change the evaluation metrics a bit, due to the fact that we're now executing multiple groups per user, with multiple rules each. We can't just deploy this without having a conversion plan for existing rule files in place. Great, thanks. I'm stretched a bit thin, so I'll wait until you've addressed the first two points and try to find someone else to look into conversion plan. If I can't find someone else, will do it myself. @jml I rebased ontop of the latest master and added a couple more fixups and improvements. The Prometheus rules YAML parser can return multiple errors if it finds multiple problems, but I wasn't sure how to best present those to the user, so I simply opted to just always show the first one (so then the user can still iteratively fix their errors). In terms of metrics, we are still measuring the durations of each rule group (as before), except that previously all rules for a user were dumped into one large group, whereas now a user can specify groupings themselves in the new YAML rules config. So you will see more, but smaller groups, and accordingly faster per-group evaluation durations. The metric that tracks the overall latency of a scheduler work item completion is still the same, except for a rename / help text update to reflect that it's not about one rule group, but a whole set of them for a given config. I think this should be ready from a code perspective now, but I'm keeping the [WIP] so that nobody accidentally merges it before we have a transition plan. A transition plan should include converting all existing user configs to the new format, updating example/default configs, Cortex documentation around configs, and maybe notifying users of the change. Do we also want to rename the current "prometheus-1518408565633.rules"-style names to end with ".yml"? Thanks Julius! No movement on a transition plan. I haven't found anyone w/ spare cycles to think about it. Will keep pinging. Renaming seems sensible. On Mon, 12 Feb 2018 at 05:40 Julius Volz [email protected] wrote: @jml https://github.com/jml I rebased ontop of the latest master and added a couple more fixups and improvements. The Prometheus rules YAML parser can return multiple errors if it finds multiple problems, but I wasn't sure how to best present those to the user, so I simply opted to just always show the first one (so then the user can still iteratively fix their errors). In terms of metrics, we are still measuring the durations of each rule group (as before), except that previously all rules for a user were dumped into one large group, whereas now a user can specify groupings themselves in the new YAML rules config. So you will see more, but smaller groups, and accordingly faster per-group evaluation durations. 
The metric that tracks the overall latency of a scheduler work item completion is still the same, except for a rename / help text update to reflect that it's not about one rule group, but a whole set of them for a given config. I think this should be ready from a code perspective now, but I'm keeping the [WIP] so that nobody accidentally merges it before we have a transition plan. A transition plan should include converting all existing user configs to the new format, updating example/default configs, Cortex documentation around configs, and maybe notifying users of the change. Do we also want to rename the current "prometheus-1518408565633.rules"-style names to end with ".yml"? — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/weaveworks/cortex/pull/689#issuecomment-364833114, or mute the thread https://github.com/notifications/unsubscribe-auth/AAHq6qZ-bI1xfTXHxGjBaN7GcT10cZ7Yks5tT87BgaJpZM4R6zUO . @jml I completely re-did this PR ontop of https://github.com/weaveworks/cortex/pull/719, which touched all the same code places. I also added flag-based binary-wide support for setting the rule format, with it still defaulting to v1. Ideally this should be deployable without breaking anything unless you explicitly set flags to indicate a v2 rule format. @juliusv can you check your dep command please - this PR seems to bring in a bunch of *_test.go files which were removed in #705. Maybe you have an older version? @bboreham As discussed on Slack, I rebased this PR ontop of latest master, ran dep ensure (with newest dep version) again, and squashed the extra vendoring changes into the original vendor update commit.
gharchive/pull-request
2018-02-06T10:14:21
2025-04-01T04:36:16.865532
{ "authors": [ "bboreham", "jml", "juliusv" ], "repo": "weaveworks/cortex", "url": "https://github.com/weaveworks/cortex/pull/689", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
618810439
Helm chart appVersion did not increase the version increment I guess it's best practice, that when increasing the appVersion of a helm chart, to also increase the version. For us, we had the chart version pinned, and expected the chart to not change (which now it does). See also: https://github.com/weaveworks/flagger/commit/99bc7040a351215221f9732c529c6839406660d3#diff-b0edf28aa46d5470ace08b8b3b74e8b1 Ah sorry for that, this is a bug in the release script as it should bump the chart version as well.
gharchive/issue
2020-05-15T08:56:47
2025-04-01T04:36:16.880117
{ "authors": [ "sjentzsch", "stefanprodan" ], "repo": "weaveworks/flagger", "url": "https://github.com/weaveworks/flagger/issues/590", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
701343239
Do not promote when not ready on skip analysis Fixes #362 ps: I couldn't find an end2end test for the skip analysis feature so I didn't update any. @worldtiki have you managed to reproduce the condition when Flagger would promote a canary with 0 healthy pods? If so then it will be great to add an e2e test to https://github.com/weaveworks/flagger/blob/master/test/e2e-istio-tests.sh @worldtiki have you managed to reproduce the condition when Flagger would promote a canary with 0 healthy pods? If so then it will be great to add an e2e test to https://github.com/weaveworks/flagger/blob/master/test/e2e-istio-tests.sh Sure. Should I add a new test or create a separate file? That one looks like it's only testing happy paths where as this is more of an edge case. @worldtiki you could add a new file and append it to Istio ClircleCI job definition here https://github.com/weaveworks/flagger/blob/master/.circleci/config.yml#L97 Thanks! Btw, after running some tests I think I found a second issue that could explain why you were not able to reproduce this (as you mentioned in the linked issue). https://github.com/weaveworks/flagger/blob/master/pkg/apis/flagger/v1beta1/canary.go#L431 I believe this should be OR and not AND.
gharchive/pull-request
2020-09-14T18:50:34
2025-04-01T04:36:16.884533
{ "authors": [ "stefanprodan", "worldtiki" ], "repo": "weaveworks/flagger", "url": "https://github.com/weaveworks/flagger/pull/695", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
353792103
Limit the number of processes that are reported Just been looking at a couple of machines that each had 25,000 defunct nginx processes. The Scope probe was using about 2 CPUs to report them, and the back-end about the same to render them (and I wasn't even looking at the processes page). We could just stop reporting processes after 2,000 or 4,000 or whatever seems like a reasonable limit. Also maybe not report defunct processes? Also maybe not report defunct processes? That seems entirely reasonable. We could just stop reporting processes after 2,000 or 4,000 or whatever seems like a reasonable limit. How do we decide which 2000 or 4000 processes have to be considered? :thinking: I meant as a guard against overload, in an error situation - the limit should be higher than you expect to get actual processes. It might be more obvious to drop the entire set of processes rather than arbitrarily truncating at some number. Note how this commit may become redundant when we have fixed this on the client side, in which case the querier requests could be lowered back down "this commit" 404s for me. And it's in a private repo.
gharchive/issue
2018-08-24T13:51:20
2025-04-01T04:36:16.891067
{ "authors": [ "bboreham", "bricef", "rade", "satyamz" ], "repo": "weaveworks/scope", "url": "https://github.com/weaveworks/scope/issues/3330", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1112837130
Notification for only installed Profiles Short description As a user I want to only be notified for new versions for installed profiles so that I don't became overwhelmed with notification that I'm not interested in. Right now regarding https://github.com/weaveworks/weave-gitops/issues/1143 and the PR https://github.com/weaveworks/weave-gitops/pull/1317 we are sending notification for all profiles regardless of their status regarding to the cluster. However, detecting if a profile is installed is not trivial. Questions are where is it installed? In this cluster? In a repo? In any of the other clusters? A potential other solutions is defining fine grained filters which a user could set to avoid getting notified for a specific profile which would help in certain cases, like not caring if it's installed or not. Acceptance criteria notifications can be filtered with either a filter or by detecting installed status Now it's possible after https://github.com/weaveworks/weave-gitops/pull/1360 has been done. An installed profile into a cluster can be detected by looking for a specific HelmRelease with namespace, name, profiles name and version.
gharchive/issue
2022-01-24T16:02:21
2025-04-01T04:36:16.894401
{ "authors": [ "Skarlso" ], "repo": "weaveworks/weave-gitops", "url": "https://github.com/weaveworks/weave-gitops/issues/1337", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
984110270
add gitlab for --app-config-url=none path Related To: #666 What changed? Removes hardcoded github code and allows the path --a until the deploy key stuff is setup for add the outcome is Error: error setting up deploy keys: error getting account type: could not get account type GET https://gitlab.com/api/v4/groups/J-Thompson12: 404 {message: 404 Group Not Found} Why? How did you test it? manually and acceptance tests Release notes Documentation Changes @J-Thompson12 LGTM, but lets wait for another reviewer to take a look. Can you create yourself a Gitlab org and test manually for now? It's ok to add the Gitlab story in small pieces, and this PR makes the code base better, so don't feel like you have to solve the whole Gitlab story in one go. Let's keep the issue open until we know that Gitlab works.
gharchive/pull-request
2021-08-31T17:29:22
2025-04-01T04:36:16.897804
{ "authors": [ "J-Thompson12", "jpellizzari" ], "repo": "weaveworks/weave-gitops", "url": "https://github.com/weaveworks/weave-gitops/pull/701", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
379672465
kernel log warning regarding usage of --physdev-out What you expected to happen? Weave net started using physdev module in its weave-npc component to efficiently identify the traffic originating or destined to local pods refer https://github.com/weaveworks/weave/issues/3344 -A WEAVE-NPC -m physdev --physdev-out vethwe-bridge -j ACCEPT -A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge -j RETURN Usage of this module should not have any effect on performance, regression in functionality etc. What happened? We see below error in /var/log/kern.log Nov 12 14:00:49 weave-master kernel: [ 526.420089] xt_physdev: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore. How to reproduce it? Message in kernel log should come up as soon as 2.5.0 release istalled Versions: $ weave version weave 2.5.0 $ docker version $ uname -a Linux weave-master 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux $ kubectl version Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} Seeing the same thing, doesn't appear to cause an issue other than spaming the terminal although others report it causes performance issues (see https://www.redhat.com/archives/libvir-list/2013-January/msg01107.html). Possible fix seems to be... iptables -D WEAVE-NPC -m physdev --physdev-out vethwe-bridge -j ACCEPT iptables -A WEAVE-NPC -m physdev --physdev-is-bridged --physdev-out vethwe-bridge -j ACCEPT ...I haven't tested significantly as I'm using a really small test env. thanks @philipmather for your suggestion. I tested out suggested change it does mute the log message. Care to raise a PR please? Note that for fix we dont need to delete the rule, as chains are flushed whenever weave-net pod is restarted. seeing the same on 2.5.0 Fixed in #3453, shall be part of 2.5.1 @murali-reddy when do you plan to release 2.5.1 ? hi @murali-reddy would be great to have a build with fixed logs, i used 2.5.0 for a week on test env and in general it works well for me. I want to promote it but this log flood is concern. Is it possible to release 2.5.1 with fix or at least have some build? thank you
gharchive/issue
2018-11-12T08:40:36
2025-04-01T04:36:16.904831
{ "authors": [ "murali-reddy", "notmaxx", "philipmather" ], "repo": "weaveworks/weave", "url": "https://github.com/weaveworks/weave/issues/3449", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
520867563
Add weave-daemonset-k8s-1.11.yaml to release script This is the manifest for Kubernetes 1.11 and above. It was added to release 2.6 by hand, but this change will mean it gets included automatically next time. LGTM
gharchive/pull-request
2019-11-11T10:25:28
2025-04-01T04:36:16.906142
{ "authors": [ "bboreham", "murali-reddy" ], "repo": "weaveworks/weave", "url": "https://github.com/weaveworks/weave/pull/3735", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2467524152
Getting started video Description Verba is awesome! Readme is cool, but I think it would be more impactful if there is a simplified video tutorial, showcasing how to set it up Is this a bug or a feature? [ ] Bug [x] Feature Steps to Reproduce Additional context Agreed, working on that! https://www.youtube.com/watch?v=2VCy-YjRRhA&t=40s&ab_channel=Weaviate•VectorDatabase tada 🥳
gharchive/issue
2024-08-15T06:54:06
2025-04-01T04:36:16.908597
{ "authors": [ "anandbhaskaran", "thomashacker" ], "repo": "weaviate/Verba", "url": "https://github.com/weaviate/Verba/issues/259", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1970868666
Add Support for object[] data type

I am trying to create a class object with the schema (provided below), with one of the data types as object[]. However, when I print the schema, the detected data type of my metadata field becomes text. I then checked the code and noticed that the Python package does not support the object[] data type (code). Can someone add support for this data type? (Ref: func)

```python
{'dataType': ['text'], 'description': "This property was generated by Weaviate's auto-schema feature on Tue Oct 31 16:07:28 2023", 'indexFilterable': True, 'indexSearchable': True, 'name': 'metadata', 'tokenization': 'word'}]
```

```python
# schema
class_obj = {
    "classes": [
        {
            "class": self.index_name,
            "vectorizer": "none",
            "properties": [
                {
                    "name": "identifier",
                    "dataType": ["text"],
                },
                {
                    "name": "text",
                    "dataType": ["text"],
                },
                {
                    "name": "metadata",
                    "dataType": ["object[]"],
                    "nestedProperties": [
                        {
                            "name": "data_type",
                            "dataType": ["text"],
                        },
                        {
                            "name": "doc_id",
                            "dataType": ["text"],
                        },
                        {
                            "name": "url",
                            "dataType": ["text"],
                        },
                        {
                            "name": "hash",
                            "dataType": ["text"],
                        },
                        {
                            "name": "app_id",
                            "dataType": ["text"],
                        },
                    ],
                },
            ],
        }
    ]
}
```

Hi @deven298, for clarity on your problem, what versions of the Python client and Weaviate server are you using?

Hey @tsmith023, I am using the weaviate-client v3.24.2 Python package.

And what about your Weaviate server version?

My Weaviate server version is 1.22.1.

Okay, that makes sense then! Please update your Python client to v3.25.2, which has support for the object and object[] types 😁
gharchive/issue
2023-10-31T16:40:57
2025-04-01T04:36:16.918205
{ "authors": [ "deven298", "tsmith023" ], "repo": "weaviate/weaviate-python-client", "url": "https://github.com/weaviate/weaviate-python-client/issues/599", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2000839337
fetch_objects() not returning as expected when neither return_metadata nor return_properties are provided

This issue applies to the Python v4 client (weaviate-client==4.3b2). The docstring of the fetch_objects() method in weaviate/collections/queries/fetch_objects.py states that:

```
"""
NOTE: If neither `return_metadata` nor `return_properties` are provided then all properties and metadata are returned except for `metadata.vector`.
"""
```

However, this seems not to be the case. Rather, none of them are returned.

✅ Example 1 (works as expected): fetch_objects() called providing return_properties=['chunk_id', 'text'].
❌ Example 2 (does not work as expected): fetch_objects() called without providing anything.

Hi @axeloh, thanks for flagging this inconsistency between the docs and the implementation! This behaviour was indeed changed with the latest version, so I will update the docs accordingly. The intended behaviour is now as follows:

- If no return_properties, then return all properties
- If no return_metadata, then return no metadata
- Always return the UUID of the objects
- If include_vector=True, then return the vector of the objects; else don't

One of the key aspects of this update is that your Weaviate version must also be bumped accordingly. This is because the logic for returning all the properties if return_properties=None must be implemented server-side (the client has no way of knowing what properties there are in a class without first querying the schema). If you bump your Weaviate minor version, then Example 2 should work as I have described above! Cheers 😁

Just to clarify for others, I was seeing this behavior using weaviate-client==4.3b2 with Weaviate 1.22.3, but no longer with Weaviate 1.22.5.

Unless I am misunderstanding something, this should be fixed. Please open a new issue in case it is not :)
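A minimal sketch of the intended behaviour with the v4 client (the collection name is illustrative, and the exact attribute holding the vector may differ across beta versions):

```python
import weaviate

client = weaviate.connect_to_local()  # assumes a local Weaviate instance
collection = client.collections.get("Article")

# No return_properties: the server returns all properties; no metadata by default.
res = collection.query.fetch_objects(limit=2)
for obj in res.objects:
    print(obj.uuid, obj.properties)  # the UUID is always returned

# Opt in to vectors explicitly.
res = collection.query.fetch_objects(limit=2, include_vector=True)
print(res.objects[0].vector)  # attribute location may differ across beta versions

client.close()
```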
gharchive/issue
2023-11-19T12:55:45
2025-04-01T04:36:16.924959
{ "authors": [ "axeloh", "dirkkul", "kylrth", "tsmith023" ], "repo": "weaviate/weaviate-python-client", "url": "https://github.com/weaviate/weaviate-python-client/issues/607", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1945153209
Support function type splitChunks We should support function type configs of splitChunks. For example, config = { splitChunks: { cacheGroups: { shared: { chunks: "all", test: /shared/, filename: data => `shared-${data.chunk.name || data.chunk.id}.js`, enforce: true }, common: { chunks: "all", test: /common/, enforce: true } } } } @JSerFeng is it supported now? Yes, we can close this
gharchive/issue
2023-10-16T12:53:41
2025-04-01T04:36:16.932347
{ "authors": [ "JSerFeng", "hardfist" ], "repo": "web-infra-dev/rspack", "url": "https://github.com/web-infra-dev/rspack/issues/4333", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1898876447
style(rspack_plugin_progress): Change the formatting style to match WebpackBar Summary This PR improves the ProgressPlugin style to match WebpackBar. Github Issue: https://github.com/web-infra-dev/rspack/issues/3967 Test Plan Go to any example project under examples Update the rspack configuration to: module.exports = { builtins: { progress: { profile:true} } } Require Documentation? [x] No [ ] Yes, the corresponding rspack-website PR is __ thanks How to use in rspack 1.0
gharchive/pull-request
2023-09-15T18:11:17
2025-04-01T04:36:16.935858
{ "authors": [ "Hamzakh777", "hardfist", "lzxb" ], "repo": "web-infra-dev/rspack", "url": "https://github.com/web-infra-dev/rspack/pull/4201", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2345099418
[Feature]: linting, formatting, husky, eslint and prettier

What problem does this feature solve?

When we create an rspress project, can we have ESLint + Prettier come with the default template, set up to run over all the components and frontmatter? For example, I would like Husky to run npm run format as a pre-commit check.

What does the proposed API look like?

It should come with the CLI.

We do not expect to maintain these in the template; users can choose any solution and use it according to the corresponding official documentation.

Agreed, ESLint already provides a pretty nice initializer:

npm init @eslint/config@latest

https://eslint.org/docs/latest/use/getting-started

And the Prettier setup is quite easy.
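For the Husky piece requested above, a minimal sketch of a pre-commit hook, assuming Husky v8-style hook files (.husky/pre-commit) and a format script defined in package.json:

```sh
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

# Run the project's formatter before every commit.
npm run format
```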
gharchive/issue
2024-06-11T00:47:26
2025-04-01T04:36:16.938395
{ "authors": [ "Timeless0911", "chenjiahan", "zmzlois" ], "repo": "web-infra-dev/rspress", "url": "https://github.com/web-infra-dev/rspress/issues/1165", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2608375405
Add a key to SVG

Compare with the latest draft file: https://github.com/web-platform-dx/web-features/blob/main/features/draft/spec/fill-stroke-3.yml

Thanks for the feedback! I basically added this to SVG because all the other stroke-* properties are there already. Do we need to look at the stroke-* properties again and decide on one of your options?
gharchive/pull-request
2024-10-23T12:13:03
2025-04-01T04:36:16.939894
{ "authors": [ "Elchi3" ], "repo": "web-platform-dx/web-features", "url": "https://github.com/web-platform-dx/web-features/pull/2054", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
358054336
Arq sets java.library.path to LD_LIBRARY_PATH to enable Tomcat Native Arquillian tests start Tomcat with natives enabled, if natives are available: org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library [1.2.17] using APR version [1.6.3]. org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true]. org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.0.2n 7 Dec 2017] run tests @jfclere Not fit for merging yet. If the LD_LIBRARY_PATH is not defined, it does this weird thing: INFO: Starting Tomcat with: [/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/bin/java, -Djava.util.logging.config.file=/home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/apache-tomcat-9.0.11/conf/logging.properties, -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager, -Dcom.sun.management.jmxremote.port=8089, -Dcom.sun.management.jmxremote.ssl=false, -Dcom.sun.management.jmxremote.authenticate=false, -Djava.security.egd=file:/dev/urandom, ${env.LD_LIBRARY_PATH}, -Dorg.jboss.byteman.verbose, -Djboss.modules.system.pkgs=org.jboss.byteman, -Dorg.jboss.byteman.transform.all, -javaagent:/home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/tomcat-jta/target/lib/byteman.jar=script:/home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/tomcat-jta/target/test-classes/scripts.btm,listener:true, -classpath, /home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/apache-tomcat-9.0.11/bin/bootstrap.jar:/home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/apache-tomcat-9.0.11/bin/tomcat-juli.jar, -Dcatalina.base=/home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/apache-tomcat-9.0.11, -Dcatalina.home=/home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/apache-tomcat-9.0.11, -Djava.io.tmpdir=/home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/apache-tomcat-9.0.11/temp, org.apache.catalina.startup.Bootstrap, -config, /home/jenkins/jenkins/workspace/narayana-tomcat/9e2742d2/apache-tomcat-9.0.11/conf/server.xml, start] Could not load Logmanager "org.apache.juli.ClassLoaderLogManager" java.lang.ClassNotFoundException: org.apache.juli.ClassLoaderLogManager at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.util.logging.LogManager$1.run(LogManager.java:195) at java.util.logging.LogManager$1.run(LogManager.java:181) at java.security.AccessController.doPrivileged(Native Method) at java.util.logging.LogManager.<clinit>(LogManager.java:181) at java.util.logging.Logger.demandLogger(Logger.java:448) at java.util.logging.Logger.getLogger(Logger.java:502) at com.sun.jmx.remote.util.ClassLogger.<init>(ClassLogger.java:55) at sun.management.jmxremote.ConnectorBootstrap.<clinit>(ConnectorBootstrap.java:851) at sun.management.Agent.startAgent(Agent.java:257) at sun.management.Agent.startAgent(Agent.java:447) Can't load log handler "1catalina.org.apache.juli.AsyncFileHandler" java.lang.ClassNotFoundException: 1catalina.org.apache.juli.AsyncFileHandler And Tomcat won't start. Gonna rework the PR.
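A minimal sketch of guarding against the unset LD_LIBRARY_PATH when assembling the JVM arguments, so that a literal, unresolved "${env.LD_LIBRARY_PATH}" token is never passed (a hypothetical helper, not Arquillian's actual API):

```java
import java.util.ArrayList;
import java.util.List;

public class NativeLibraryArgs {
    // Only pass the native library path when LD_LIBRARY_PATH is actually set.
    public static List<String> jvmArgs() {
        List<String> args = new ArrayList<>();
        String ldLibraryPath = System.getenv("LD_LIBRARY_PATH");
        if (ldLibraryPath != null && !ldLibraryPath.isEmpty()) {
            args.add("-Djava.library.path=" + ldLibraryPath);
        }
        return args;
    }
}
```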
gharchive/pull-request
2018-09-07T12:50:21
2025-04-01T04:36:16.974268
{ "authors": [ "Karm" ], "repo": "web-servers/narayana-tomcat", "url": "https://github.com/web-servers/narayana-tomcat/pull/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
97875900
Font is not monospace on clean install

Also, I can't figure out how to configure this. But surely it should be some kind of monospace by default anyway?

I had this same issue. Go to Settings, click on "Open Config Folder", open the file /packages/term2/styles/term2.less, and add the following between lines 7 and 8: "font-family: monospace;"

Currently, this plugin is taking the font-family from your configuration. Did you use a non-monospaced font for the code? Can you tell us what font you are using in the Atom preferences, please?

Personally, I didn't change any Atom preferences from the default; I just installed term2 and it didn't have a monospaced font, so I fixed it.

Can you check Atom Menu > Preferences > Settings tab > Editor Settings section > Font family and tell me what you have there, please?

Ok, I see that I had personally specified a font. I guess the default value is empty, like you said. I removed it and was able to reproduce the issue. I will be able to fix that easily by getting the user font || monospace.
gharchive/issue
2015-07-29T07:07:49
2025-04-01T04:36:17.026321
{ "authors": [ "MoOx", "jchamb2010", "voltrevo" ], "repo": "webBoxio/atom-term2", "url": "https://github.com/webBoxio/atom-term2/issues/167", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
614249740
Segmentation fault (core dumped) during license activation (for specific beta versions of unity)

Describe the bug
When I try to run the tests and build my Unity project, it gives me a generic docker error. It has some Segmentation Fault errors but they don't seem to stop the job when they are raised.

Screenshots
logs_43.zip

Hey @BlackDereker and thanks for your bug report. I'm not sure what the issue is caused by, do you have any idea? What have you tried? Could you test if you have the same issue with unity version 2019.2.11f1?

Looks like it gives an error when trying to connect to Unity's licensing system. I didn't test on version 2019.2.11f1.

I am receiving the same error when running against version 2020.1.0b9. Here is my config:

# This is a basic workflow to help you get started with Actions
name: CI

# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

env:
  UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}

jobs:
  build:
    name: Build my project ✨
    runs-on: ubuntu-latest
    steps:
      # Checkout
      - name: Checkout repository
        uses: actions/[email protected]
        with:
          lfs: true

      # Cache
      - uses: actions/[email protected]
        with:
          path: Library
          key: Library

      # Build
      - name: Build project
        uses: webbertakken/[email protected]
        with:
          unityVersion: 2020.1.0b9
          targetPlatform: StandaloneWindows64
          versioning: None

      # Output
      - uses: actions/[email protected]
        with:
          name: Build
          path: build

Thanks everyone for reporting the issues. Please keep posting the versions that suffer from this error. @KaneFreeman Can you also confirm that these errors are happening consistently (or only once per x runs)?

It is happening on every build. I had the setup previously working on version 2019.3.8f1. This was the working config:

# This is a basic workflow to help you get started with Actions
name: CI

# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

env:
  UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}

jobs:
  build:
    name: Build my project ✨
    runs-on: ubuntu-latest
    steps:
      # Checkout
      - name: Checkout repository
        uses: actions/checkout@v2
        with:
          lfs: true

      # Cache
      - uses: actions/[email protected]
        with:
          path: Library
          key: Library

      # Test
      - name: Run tests
        uses: webbertakken/[email protected]
        with:
          unityVersion: 2019.3.8f1

      # Build
      - name: Build project
        uses: webbertakken/[email protected]
        with:
          unityVersion: 2019.3.8f1
          targetPlatform: StandaloneWindows64

      # Output
      - uses: actions/upload-artifact@v1
        with:
          name: Build
          path: build

@KaneFreeman having compared your two workflows, there are actually a lot of changes, which doesn't really help narrow it down to for example a specific version of unity-builder or of unity (see picture below). Did you also update the license to one from 2020.1.0b9, or is it crashing because you're using a license that still has an old format?

Yes, I am working to test it more incrementally. I did update the license. I was able to get the error again just by updating the version. Here is the config:

# This is a basic workflow to help you get started with Actions
name: CI

# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

env:
  UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}

jobs:
  build:
    name: Build my project ✨
    runs-on: ubuntu-latest
    steps:
      # Checkout
      - name: Checkout repository
        uses: actions/checkout@v2
        with:
          lfs: true

      # Cache
      - uses: actions/[email protected]
        with:
          path: Library
          key: Library

      # Test
      - name: Run tests
        uses: webbertakken/[email protected]
        with:
          unityVersion: 2020.1.0b9

      # Build
      - name: Build project
        uses: webbertakken/[email protected]
        with:
          unityVersion: 2020.1.0b9
          targetPlatform: StandaloneWindows64

      # Output
      - uses: actions/upload-artifact@v1
        with:
          name: Build
          path: build

I can confirm my project gets the same segfault error when using 2020.1.0b9, and builds fine with no issues using 2019.3.8f1 and 2019.3.14f1.

2020.1.0b9 is beta, you should report this to unity as it could definitely be on their side. :v:

Hi friends. I fixed this issue by removing calls to xvfb and instead using the -nographics flag. That got my activation working properly in Docker. I'm not sure if the GitHub Actions use xvfb, but I'm betting they do, given that the behavior is the same. I wrote up our fixes here: https://johnaustin.io/articles/2020/running-unity-20201-in-docker

Hey @Kleptine, thanks for posting a possible fix! We've been talking about using that flag in https://github.com/webbertakken/unity-actions/issues/84 as well, and unfortunately it's not a good fix as it disables features in unity (hence we were using xvfb in the first place).
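For readers landing here, a minimal sketch of the workaround described above: invoking the Unity editor headlessly with -batchmode -nographics instead of wrapping the call in xvfb. The flags are standard Unity command-line options, but the editor path and license file name below are hypothetical placeholders, not the actual values used by unity-actions:

import subprocess

# Hypothetical editor path inside the container; adjust to your image.
UNITY = "/opt/Unity/Editor/Unity"

# With -nographics Unity skips graphics-device initialization entirely,
# so no xvfb-run wrapper (virtual framebuffer) is needed for activation.
subprocess.run(
    [
        UNITY,
        "-batchmode",     # run without the editor UI
        "-nographics",    # do not initialize a graphics device
        "-quit",          # exit once the command finishes
        "-logFile", "-",  # stream the editor log to stdout
        "-manualLicenseFile", "Unity_v2020.x.ulf",  # hypothetical license file
    ],
    check=True,
)

As the maintainer notes in the thread, this trades away editor features that require a display, which is why the project used xvfb in the first place.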
gharchive/issue
2020-05-07T18:14:48
2025-04-01T04:36:17.037255
{ "authors": [ "BlackDereker", "GabLeRoux", "KaneFreeman", "Kleptine", "russelljahn", "webbertakken" ], "repo": "webbertakken/unity-actions", "url": "https://github.com/webbertakken/unity-actions/issues/66", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2593819289
Added claude 3.5 sonnet and perplexity with llama model to curated ai…

Added claude 3.5 sonnet model and perplexity with llama large 8b instruct model to the curated list of models.

I have updated the file structure in the PR, with the handler moved, and also removed the try/catch from the sonnet file so that it no longer suppresses the error.

You added axios without using it anywhere. Is that a mistake, or did you do it on purpose? And if the latter, why?

I added axios because Perplexity doesn't offer any SDK or package for using the models, and they were only available through the API call. I think axios is the way to go for calling APIs rather than the fetch method, where you need to do the heavy lifting yourself; axios provides things out of the box that the normal way of calling APIs using fetch doesn't. If axios doesn't work out for us, I can do it the normal, standard way using fetch too.

Sure, will change it to fetch from axios.

@krishkalaria12 btw are you in discord?

@webdevcody my discord username is krishkalaria12
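As an illustration of the "no SDK needed" point, here is a dependency-free sketch of calling Perplexity's OpenAI-compatible chat completions endpoint, written in Python's standard library rather than the project's TypeScript. The endpoint URL reflects Perplexity's public API at the time of writing, and the model identifier is a placeholder that may not match their current catalog:

import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; the model name below is illustrative.
URL = "https://api.perplexity.ai/chat/completions"

payload = {
    "model": "llama-3.1-8b-instruct",  # hypothetical identifier
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}",
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(request) as response:
    body = json.load(response)
    print(body["choices"][0]["message"]["content"])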
gharchive/pull-request
2024-10-17T07:01:21
2025-04-01T04:36:17.135893
{ "authors": [ "FleetAdmiralJakob", "krishkalaria12", "webdevcody" ], "repo": "webdevcody/survive-the-night-sim", "url": "https://github.com/webdevcody/survive-the-night-sim/pull/62", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
104208430
Reverse proxy doesn't try to request the website if the first attempt fails

Problem
Consider the following scenario:

1. You start vagrant-development
2. Forget to start your docker
3. Request http://foo.vm/
4. Start docker containers

The vagrant-development reverse proxy throws 503 even if you force-refresh quite a few times in the browser. After force-refreshing (quite aggressively, 10-15 times) the reverse proxy tries to request the website from the docker container.

Is there any threshold time or request threshold which prevents the reverse proxy from requesting the website from the docker container?

Docker containers need some seconds to be up (see docker-compose logs). The reverse proxy should get a running connection to docker containers when they're up. Please check if the docker containers were fully running - php-fpm needs some time due to runtime provisioning.

ok thx, will check

Thx for your quick answer! That was it
gharchive/issue
2015-09-01T07:58:29
2025-04-01T04:36:17.150782
{ "authors": [ "jousch", "mblaschke" ], "repo": "webdevops/vagrant-development", "url": "https://github.com/webdevops/vagrant-development/issues/64", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1323256780
File manager - Don't open a new tab every time the File manager opens

1. If I have a directory open
2. And I then click on the module preferences
3. I then click on return to file listing
4. Now I suddenly have two tabs open

This will also happen if I leave the file manager and click on it again in the left menu. I have discussed this with you before, but it seems that the behavior is still the same 😐

If I want a new tab I can easily click on the plus sign. If I set Restore previously used tabs on initial load to No, I will lose the path if I leave the file manager and want to go back again. I will even lose that path if I open the module preferences 😒

If I leave and return, I soon have 10 tabs open if I don't close them, and I will keep my old path there, so setting Restore previously used tabs on initial load to No is not an option.

Why create a new tab every time I open the file manager? It makes no sense to me. There must be others out there that think this is annoying. I can't be the only one?

Hello,

Thanks for the heads up. However, you won't have 10 tabs opened if you go back and forth as you described. File Manager always makes sure that the tab which is requested in the URL (i.e. index.cgi?path=/home/user/public_html) is always opened (for the root user, if missing it will open /). If the tab is stored in history it won't be restored twice.

It opens a new tab every time I open the File manager from the left menu.

Okay, keep these 5 tabs opened as on the screenshot. Now go to the file manager preferences or any other module and then come back - which tabs do you have opened then?

Yes, I know that if I have one tab open in the root it won't open a new one.

What do you want me to do!? 🙂

Those tabs are user dependent .. and host dependent ..

What do you want me to do then!? 🙂

Not open a new tab when one tab is already open (regardless of what folder I have open).

Is the opened tab part of a filemin/index.cgi?path=/path/is/here URL?

It looks like this: https://xxx.xxx.xxx.xxx:10000/filemin/?xnavigation=1

That link is called from the navigation menu. When path isn't defined it will open /. We cannot compare the functionality with a desktop environment (Windows or macOS) when you reopen a file manager. However, the functionality resembles the desktops - when you right-click a folder to open it, a file manager window opens with the requested directory. This is similar to what we're doing in Webmin. If we change it as you ask, then when a user clicks the link with a path defined, which directories will be opened?

I don't really understand how you can remember a tab and open that again, but at the same time need to open a second tab that will open at /? The tab that has another path than / must be stored somewhere? Why isn't it possible to only open that tab instead of a new tab at /? Maybe don't open any tab at all and let the user press the + button for a new tab if no other tab is open? If a tab is already open, it will use that regardless of the path the tab is in. I'm not the expert here, you are 😃

"The tab that has another path than / must be stored somewhere?"

That's the point - no. It gets opened natively, on the initial call from the menu link path=, the same way it would have been done in the old theme.

"Why isn't it possible to only open that tab instead of a new tab at /?"

To accommodate the initial design, we could store the last active directory on the navigation menu link path=, and the next time you visited it, it would be reopened .. It's easy to fix ..

Also, it wouldn't work for Virtualmin in that same way, as we need to open a defined home directory.

"Maybe don't open any tab at all and let the user press the + button for a new tab if no other tab is open?"

Nah, that's unintuitive for a web app.

Alright, good news! Your wish will come true, as the latest commit adds the ability to preserve the previously visited directory. You are welcome to try it already by installing the latest devel version of the theme. Also, I will add more handlers to accommodate links opened from favorites and when returning from the File Manager preferences page.

Alright, all fixed and ready to try. Enjoy!

Yes, it is working 😍
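To make the described behavior easier to follow, here is an illustrative reconstruction of the tab-restore rule in Python; it is not Authentic Theme's actual code, just a sketch of the logic the maintainer outlines (restore history, always open the requested path=, never duplicate):

def tabs_to_open(stored_tabs: list[str], requested_path: str | None) -> list[str]:
    """Sketch of the described rule, not the theme's real implementation."""
    tabs = list(stored_tabs)
    # The directory requested in the URL (?path=...) is always opened,
    # but never restored twice if it is already in the stored history.
    if requested_path is not None and requested_path not in tabs:
        tabs.append(requested_path)
    return tabs

# Revisiting the module with no explicit path now just restores history:
print(tabs_to_open(["/home/user/public_html"], None))    # one tab
# A link with a path adds a tab only if it is not already open:
print(tabs_to_open(["/home/user/public_html"], "/etc"))  # two tabs
print(tabs_to_open(["/etc"], "/etc"))                    # still one tab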
gharchive/issue
2022-07-30T18:30:22
2025-04-01T04:36:17.352853
{ "authors": [ "Sopor", "iliajie" ], "repo": "webmin/authentic-theme", "url": "https://github.com/webmin/authentic-theme/issues/1613", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2128416023
Need for clarification

Hello,

I must say your representation is a bit hard to follow for a novice :D

When you look at this page: https://storage.googleapis.com/demos.webmproject.org/webp/cmp/2024_01_25/index.html?load=jpegxl_vs_avif.json#ref=AVIF+speed+6
avif seems to have the lead over jpegxl at every effort.

When you look at this one: https://storage.googleapis.com/demos.webmproject.org/webp/cmp/2024_01_25/index.html?load=all.json#WebP+method+4-quality=95..95&matcher_ssim=off
the tendency is flipped, with jpegxl taking the lead?

Could you explain how to use your site to get a good idea of the relative performance of both codecs?

Also, it is difficult to get an idea of the quality selected as reference; perhaps a link to an image at this quality could help show the level of distortion obtained (I guess < 0.1bpp cannot seriously be used for display).

Thank you for the feedback!

"I must say your representation is a bit hard to follow for a novice :D"

I agree and I heard that multiple times already. I will try to improve that side of the framework at some point.

"When you look at this page ... avif seems to have the lead over jpegxl at every effort. When you look at this one ... the tendency is flipped, with jpegxl taking the lead?"

You are correct. As for any data visualization tool, the results can greatly vary depending on the constraints applied to the comparison. One can easily skew the findings in one or another way, intentionally or inadvertently. I noticed that your second plot only has 2521 comparisons, which may signal low confidence in the results due to the lack of data points.

"Could you explain how to use your site to get a good idea of the relative performance of both codecs?"

Sure, if you specify relative to what. The goal of this framework is not to prove that "codec X is better". The purpose is showing that "codec X is A% faster to encode, B% slower to decode, generates files C% smaller than codec Y, for the images in this data set with the same metric value, on this quality range etc." Having stronger corpus or environment assumptions could be misleading.

I tried to minimize any bias in the default settings:

- Two modern codecs, AVIF and JPEG XL, both compared to WebP, which has been stable in both format and implementation for years now.
- The effort/speed/method encoding settings (meaning how much time/CPU the user would like to spend for encoding an image) are the same as the defaults in the binaries of the main codec implementations (cwebp, avifenc, cjxl).
- The full input quality setting ranges are used.
- Comparing bytes and timings is easier to reason with: "twice faster" and "20% smaller" are more accessible than "better looking by 2dB". That is why comparisons are made for the same visual quality, according to some objective metric.
- There is no consensus on the objective metric to rely on. SSIM favors AVIF and SSIMULACRA2 favors JPEG XL. To avoid picking a side, I enabled both by default.

Unfortunately this approach reduces the total number of comparisons and probably skews the plot in its own way, but this is somewhat compensated by the fact that the interactive tool allows for easy tuning of matchers and metrics to fit one's needs or convictions.

Subjective visual quality evaluation (aka user surveys) could settle the debate for a given corpus and settings. Unfortunately this is rather involved.

"Also, it is difficult to get an idea of the quality selected as reference; perhaps a link to an image at this quality could help show the level of distortion obtained"

This heavily depends on the image unfortunately. I agree that showing one image at a given quality setting or bpp would give a sense of the subjective visual quality loss, but it would also be misleading for other images that may require a different amount of quantization to achieve the same "loss" (if such a definition exists). In codec-compare one can see the images by clicking on a small dot, scrolling down and clicking on the image below. I was thinking about making this more visible.

"I guess < 0.1bpp cannot seriously be used for display"

From the <0.1bpp plot:
- Top left point: bpp < 0.03
- Right-most point: bpp < 0.03

Not super crisp, but can be displayed as a background for example.

Which SSIM code did you use?

It is written here:

libwebp2 quality metrics command
Command line to get libwebp2 quality metrics PSNR and SSIM:

git clone https://chromium.googlesource.com/codecs/libwebp2
cd libwebp2
git checkout e0c6533
cd ..
cmake -S libwebp2 -B libwebp2/build -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
cmake --build libwebp2/build --parallel
libwebp2/build/get_disto -ssim ${encoded_name}.png ${original_name}

So it would be https://chromium.googlesource.com/codecs/libwebp2/+/e0c6533107649063c236b5666f2cc4c10b9b7591/src/enc/distortion_enc.cc#418

I agree there is no consensus about the metric, yet ssimulacra2 is the only one which was fitted on subjective data (which does not make it perfect everywhere, but still).

The issue with picking any single metric by default is that it is unfair to all codecs that do not optimize for that in their Rate-Distortion optimization. libavif uses aom's SSIM by default but allows for PSNR to be used. libwebp uses PSNR for RD-opt. Do you know if libjxl supports tuning the metric used for RD-opt? It uses SSIMULACRA2.1 by default, right?

I'm not sure, but I believe it uses the butteraugli metric as an internal target (hence a clear focus on high quality compression). If you want to know more, I invite you to jump on the discord :) (link here: https://jpegxl.info/). Many core developers are there, even some from google (Zürich office) and even one who worked on the webp lossless codec :) and they all enjoy technical discussions.

The 2024_03_25 batch uses SSIM and Butteraugli as discussed above. Feel free to reopen this thread if necessary.
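To make the units in this thread concrete, here is a small sketch (hypothetical helpers, not part of codec-compare) of the two quantities the discussion keeps returning to: bits per pixel for a compressed image, and the relative size difference between two codecs matched at the same quality:

def bits_per_pixel(file_size_bytes: int, width: int, height: int) -> float:
    """Compression rate in bits per pixel (bpp)."""
    return file_size_bytes * 8 / (width * height)

def relative_size_saving(bytes_a: int, bytes_b: int) -> float:
    """Fraction by which codec A's file is smaller than codec B's.

    Positive means A is smaller; 0.2 reads as "20% smaller".
    """
    return (bytes_b - bytes_a) / bytes_b

# A 1920x1080 image compressed to 25 kB sits just under 0.1 bpp:
print(bits_per_pixel(25_000, 1920, 1080))    # ~0.096
print(relative_size_saving(20_000, 25_000))  # 0.2, i.e. 20% smaller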
gharchive/issue
2024-02-10T13:13:04
2025-04-01T04:36:17.371475
{ "authors": [ "y-guyon", "yota-code" ], "repo": "webmproject/codec-compare", "url": "https://github.com/webmproject/codec-compare/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }