Dataset schema:
- repo_name: string (length 4–136)
- issue_id: string (length 5–10)
- text: string (length 37–4.84M)
jupyterlab/jupyterlab
934505349
Title: String variable not displayed in debugger variable tree view
Question: username_0:

## Reproduce

See Gif above

## Expected behavior

String is shown

## Context

- Operating System and version: Debian 11
- Browser and version: Firefox 88
- JupyterLab version: master

Status: Issue closed
xnimorz/use-debounce
828973159
Title: isPending Type '() => boolean' is not assignable to type 'boolean | null | undefined'.
Question: username_0: I have a button `disabled` prop that accepts only `boolean | null | undefined`, whilst the latest version of this library returns `isPending` as `() => boolean`:

`disabled={debouncedCallback.isPending}`

So, how do I fix it?

Status: Issue closed
hazelcast/hazelcast-simulator
86460377
Title: [TEST-FAILURE] AgentRemoteServiceTest.testEcho
Question: username_0:

```
java.net.BindException: Address already in use
    at java.net.PlainSocketImpl.socketBind(Native Method)
    at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:382)
    at java.net.ServerSocket.bind(ServerSocket.java:375)
    at java.net.ServerSocket.<init>(ServerSocket.java:237)
    at com.hazelcast.simulator.agent.remoting.AgentRemoteService.start(AgentRemoteService.java:48)
    at com.hazelcast.simulator.agent.remoting.AgentRemoteServiceTest.setUp(AgentRemoteServiceTest.java:43)
```

https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-Simulator-sonar/com.hazelcast.simulator$simulator/165/testReport/junit/com.hazelcast.simulator.agent.remoting/AgentRemoteServiceTest/testEcho/

Status: Issue closed

Answers:
username_1: Should be fixed via https://github.com/hazelcast/hazelcast-simulator/commit/5999a09.
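A common way to make such tests immune to `Address already in use` failures is to bind to port 0 and let the OS pick a free ephemeral port (the Java test would do the analogous thing with `new ServerSocket(0)`). This Python sketch only illustrates the idea; it is not the Simulator's actual fix:

```python
import socket

def free_port() -> int:
    """Ask the OS for an unused ephemeral port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))   # port 0 means "any free port"
        return s.getsockname()[1]  # the port the OS actually assigned

port = free_port()
assert 0 < port <= 65535
```

Because each test run gets a fresh port, two test runs on the same CI host can no longer collide on a hard-coded port number.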
apache/apisix
722128811
Title: When will apisix 1.6 or the next version be released?
Question: username_0: When will apisix 1.6 or the next version be released?

Answers:
username_1: We are working on some critical issues; once they are resolved (soon), we can begin to release 2.0. See the milestones for more detail: https://github.com/apache/apisix/milestones

Status: Issue closed
cossacklabs/themis
628692342
Title: iOS decryption using TSCellTokenEncryptedData is not working
Question: username_0:

**Describe the bug**
We have integrated Themis on our backend and mobile app, so data encrypted with the Themis Secure Cell token-protect mode is coming from the API to mobile. When the iOS app tries to decrypt it using the same master key that the backend uses, with the given ciphertext and token, the library throws an error.

**To Reproduce**
Steps to reproduce the behavior:

1. Master key: <KEY>

2.
```swift
let hash = "abDoZIU="
let token = "<KEY>
let cipherText = Data(base64Encoded: hash, options: .ignoreUnknownCharacters)!
let cipherToken = Data(base64Encoded: token, options: .ignoreUnknownCharacters)!
var encryptedMessage: TSCellTokenEncryptedData = TSCellTokenEncryptedData()
encryptedMessage.cipherText = NSMutableData(data: cipherText)
encryptedMessage.token = NSMutableData(data: cipherToken)
do {
    let decryptedMessage: Data = try cellToken.unwrapData(encryptedMessage, context: nil)
    let resultString: String = String(data: decryptedMessage, encoding: .utf8)!
    print("decryptedMessage = \(resultString)")
} catch let error as NSError {
    print(error.localizedDescription)
    print("Error occurred while decrypting \(error)", #function)
    return
}
```

3.
See the following error:

```
ThemisDemo/Pods/themis/src/themis/sym_enc_message.c:345 - error: themis_auth_sym_plain_decrypt(hdr->alg, derived_key, derived_key_length, iv, hdr->iv_length, in_context, in_context_length, encrypted_message, hdr->message_length, message, message_length, auth_tag, hdr->auth_tag_length)
Error occurred while decrypting Error Domain=com.CossackLabs.Themis.ErrorDomain Code=11 "Secure Cell (Token Protect) decryption failed" UserInfo={NSLocalizedDescription=Secure Cell (Token Protect) decryption failed} decryptData()
```

**Expected behavior**
It should be decrypted if we send base64-decoded data to TSCellTokenEncryptedData, as we are able to decrypt the same data on Android, Node.js and https://docs.cossacklabs.com/simulator/data-cell/

**Environment (please complete the following information):**
- OS: latest iOS
- Hardware: 64-bit, iPhone 11 Max simulator
- Themis version: 0.12.2
- Installation way: cocoapod

Answers:
username_1: Hi @username_0! Thanks for reporting an issue. The issue exists, but I believe it's not with SwiftThemis but with our documentation for Themis Interactive Simulator.

### Why the code does not work

In Swift code you have to decode the key into `Data`:

```swift
let masterKey = Data(base64Encoded: "<KEY> options: .ignoreUnknownCharacters)!
let cellToken = TSCellToken(key: masterKey)!
```

because `TSCellToken` accepts only `Data`. However, the interactive simulator does not decode the key as base64. Instead it interprets it as UTF-8 text. When you input the key like this into the simulator:

<img width="889" alt="Simulator screenshot" src="https://user-images.githubusercontent.com/1256587/83452903-3586d300-a462-11ea-91b7-0bd6d9fa1a91.png">

The _actual_ key that is used for encryption is not

```swift
let masterKey = Data(base64Encoded: "<KEY> options: .ignoreUnknownCharacters)!
```

but rather

```swift
let masterKey = "<KEY>data(using: .utf8)!
```

Effectively, the data was encrypted with a key different from the one used for decryption, which causes the decryption to fail.

### What code works successfully

If you use the same key as the simulator, the incorrectly decoded key, then the data can be decrypted successfully:

```swift
let masterKey = "<KEY>
let hash = "abDoZIU="
let token = "<KEY>
let cipherText = Data(base64Encoded: hash, options: .ignoreUnknownCharacters)!
let cipherToken = Data(base64Encoded: token, options: .ignoreUnknownCharacters)!
let encryptedMessage = TSCellTokenEncryptedData()
encryptedMessage.cipherText = NSMutableData(data: cipherText)
encryptedMessage.token = NSMutableData(data: cipherToken)

let correctKey = Data(base64Encoded: masterKey, options: .ignoreUnknownCharacters)!
let cellTokenDoesNotWork = TSCellToken(key: correctKey)!
let decryptedMessageFail = try? cellTokenDoesNotWork.unwrapData(encryptedMessage, context: nil)
let nilMessage = "<nil>".data(using: .utf8)!
print("decryptedMessage = \(String(data: decryptedMessageFail ?? nilMessage, encoding: .utf8)!)")
// prints "decryptedMessage = <nil>"

let simulatorKey = masterKey.data(using: .utf8)!
let cellTokenThisWorks = TSCellToken(key: simulatorKey)!
let decryptedMessageOk = try? cellTokenThisWorks.unwrapData(encryptedMessage, context: nil)
print("decryptedMessage = \(String(data: decryptedMessageOk ?? nilMessage, encoding: .utf8)!)")
// prints "decryptedMessage = Hello"
```

### What are the next steps

You cannot use base64-encoded encryption keys with the interactive simulator at the moment. The key used with the interactive simulator must be encoded in UTF-8 when testing the output against the application code. Could you please verify that using UTF-8 works for you?

username_0: @username_1 you are good, it worked now

Status: Issue closed
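The root cause above, the same key string decoded as base64 versus taken as raw UTF-8 bytes yielding two different keys, is easy to demonstrate. A short Python illustration (the key string here is a made-up placeholder, not the real key):

```python
import base64

key_text = "aGVsbG8="  # hypothetical base64-looking key string

key_as_base64 = base64.b64decode(key_text)  # b'hello': how SwiftThemis decoded it
key_as_utf8 = key_text.encode("utf-8")      # b'aGVsbG8=': how the simulator used it

# Two different byte strings, so encrypting with one and
# decrypting with the other can never succeed.
print(key_as_base64 != key_as_utf8)  # True
```

Whenever a key crosses a text boundary (JSON, a web form, an environment variable), both sides must agree on whether it is base64-encoded binary or literal text.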
openssl/openssl
860832335
Title: OpenSSL 1.1.1j: a fatal error in handshaking with memory BIOs
Question: username_0: Hi, I am using OpenSSL 1.1.1j, and I have hit a problem when using OpenSSL with memory BIOs. The TLS handshake hops 4 times between client and server as shown in the snippet below:

1. [client -> server]: ClientHello
2. [server -> client]: ServerHello, Certificate*, ServerKeyExchange*, ...
3. [client -> server]: Certificate*, ClientKeyExchange, CertificateVerify*, ... | client handshake finished
4. [server -> client]: [ChangeCipherSpec] | server handshake finished
5. ------ Application Data <-------> Application Data ----------

In my client project, I use one worker thread for sending and another for receiving. After step 3, the client's handshake is finished (SSL_is_init_finished() returns 1), so the sending thread begins to send application data (SSL_write, BIO_read) while the receiving thread gets the last handshake data (BIO_write, then SSL_read) at the same time. At this point, BIO_read returns -1 in the sending thread, and SSL_get_error() returns SSL_ERROR_SYSCALL. I guess that processing the last handshake data (step 4) while sending application data in another thread is not permitted. Is there any API that reports the correct handshake status for the client once it completes step 4? Does anyone know about this problem? I really need a solution. Thanks in advance!

Answers:
username_1: @username_0 If SSL_is_init_finished() returns 1 in both the sending thread and the receiving thread, SSL_write will not be able to do BIO_read: because s->statem.in_init == 0, SSL_write will not call s->handshake_func. SSL_read is similar.

username_0: @username_1 Thanks for your reply! I have tested my project thousands of times, and it always crashed or returned an error in SSL. I got the answer from https://www.openssl.org/docs/faq.html#PROG: "an SSL connection cannot be used concurrently by multiple threads." Then I used one std::mutex to lock the SSL object in the sending thread and the receiving thread, and it is OK now.
Thank you!

Status: Issue closed
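The resolution above, that an SSL connection cannot be used concurrently by multiple threads and must be guarded by one mutex, is a generic pattern. This Python sketch (the class and method names are my own, not an OpenSSL API) shows the shape of serializing a sender thread and a receiver thread on a single lock:

```python
import threading

class LockedConnection:
    """Wrap a connection object that is not thread-safe (like an OpenSSL
    SSL object) so a sender thread and a receiver thread never drive its
    internal state machine at the same time."""

    def __init__(self, conn):
        self._conn = conn
        self._lock = threading.Lock()

    def write(self, data):
        with self._lock:  # only one thread inside the connection at a time
            return self._conn.write(data)

    def read(self):
        with self._lock:
            return self._conn.read()

# Demo with a deliberately simple fake connection standing in for SSL.
class FakeConn:
    def __init__(self):
        self.buf = []
    def write(self, data):
        self.buf.append(data)
        return len(data)
    def read(self):
        return self.buf.pop(0) if self.buf else None

locked = LockedConnection(FakeConn())
threads = [threading.Thread(target=locked.write, args=(b"x",)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(locked.read())  # b'x'
```

In the C++ project from the thread, the equivalent is taking the same `std::mutex` around every `SSL_write`/`BIO_read` and `BIO_write`/`SSL_read` pair.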
OfTheWolf/UBottomSheet
784414493
Title: how to set the top height of the bottomSheet?
Question: username_0: I want to set it close to the superView but it shows only on 80% of the screen. Help me here, please?

Answers:
username_1: @username_0 By default the sheet min/max positions were set to 0.2 to 0.7 of the superview height, so you should use a custom data source instead. See the [custom data source](https://github.com/username_1/UBottomSheet/blob/master/Example/UBottomSheet/DataSource/MyDataSource.swift) example, and see [UBottomSheetCoordinatorDataSource.swift#L38](https://github.com/username_1/UBottomSheet/blob/master/Sources/UBottomSheet/Classes/UBottomSheetCoordinatorDataSource.swift#L38):

```swift
///Default data source implementation
extension UBottomSheetCoordinatorDataSource {
    public func sheetPositions(_ availableHeight: CGFloat) -> [CGFloat] {
        return [0.2, 0.7].map { $0 * availableHeight }
    }
}
```

Status: Issue closed
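For reference, the default mapping quoted above is just fractions of the available height; a quick Python transliteration of that Swift default (purely illustrative):

```python
def sheet_positions(available_height, fractions=(0.2, 0.7)):
    """Mirror of the default data source: each fraction of the
    superview height becomes a snap position for the sheet."""
    return [f * available_height for f in fractions]

# A custom data source would simply return different fractions,
# e.g. (0.1, 1.0) to let the sheet reach the top of the superview.
print(sheet_positions(800))  # [160.0, 560.0]
```

So a sheet that "shows only on 80% of the screen" is just the 0.7 upper fraction at work; overriding the fractions moves the top position.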
SpongePowered/SpongeForge
421047998
Title: Items in NuclearCraft machines getting dropped
Question: username_0:

**I am currently running**

- SpongeForge version: spongeforge-1.12.2-2768-7.1.6-RC3626
- Forge version: forge-1.12.2-14.23.5.2781
- Plugins/Mods: NuclearCraft-2.13e--1.12.2

**Issue Description**

Updated Sponge from RC3592 to RC3626 and now items are being instantly dropped when trying to place them into a NuclearCraft inventory.

To reproduce:
1. Place down an Isotope Separator and power it
2. Put things like Uranium Ingots in it

Answers:
username_1: Downgrade to 3616 until a fix is in place.
username_2: Can you re-test with the following jar? https://www.mediafire.com/file/9o40kii7x9drz1z/spongeforge-1.12.2-2768-7.1.6-RC0.jar

Status: Issue closed

username_0: I'm unable to reproduce the issue in that build.
username_3: Due to other bugs in SpongeForge, I can't run versions that have this fix. I'm trying to figure out how far I have to downgrade for it to work. The bug reproduces in spongeforge-1.12.2-2768-7.1.6-RC3621.jar.
username_3: The bug does not reproduce in spongeforge-1.12.2-2768-7.1.6-RC3616.jar.
username_2: Does not change that this issue is actually already fixed.
Bugs in newer builds are being fixed regardless.
quarkusio/quarkus
853254444
Title: Remote dev mode doesn't recognise changes to static resources
Question: username_0:

## Describe the bug

Using remote dev mode, any changes to static resources under the META-INF/resources folder don't trigger the update like with local dev mode.

### Expected behavior

Changes to static resources trigger the update when a new HTTP request is made.

### Actual behavior

Changes to static resources are ignored when a new HTTP request is made.

## To Reproduce

- Install Microk8s and enable the registry (or any equivalent Kubernetes environment)
- Create a fresh Quarkus project with the Maven plugin, with just the resteasy dependency and the automatically generated REST endpoint
- `mvn clean package` -> the app is correctly deployed on Kubernetes
- `mvn quarkus:remote-dev -Dquarkus.live-reload.url=http://10.152.183.197:8080/` -> remote dev is active

1) Change the label "Hello RESTEasy" to "Hello RESTEasy!!" in the REST service Java source code -> curl -> OK

```
2021-04-01 23:33:18,987 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending dev/app/com/username_0/RemoteService.class
2021-04-01 23:33:18,989 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending app/demo-remote-dev-1.0.0-SNAPSHOT.jar
2021-04-01 23:33:18,991 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending quarkus-run.jar
2021-04-01 23:33:18,992 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending lib/deployment/build-system.properties
```

2) Change the label to "Your new Cloud-Native application is ready!!!!" in the index.html page under the resource folder -> curl -> NOTHING happens

3) Change the label back to "Hello RESTEasy" in the REST service Java source code -> curl -> OK, the index.html page is also reloaded

```
2021-04-01 23:34:28,603 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending dev/app/com/username_0/RemoteService.class
2021-04-01 23:34:28,605 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending app/demo-remote-dev-1.0.0-SNAPSHOT.jar
2021-04-01 23:34:28,607 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending dev/app/META-INF/resources/index.html
2021-04-01 23:34:28,608 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending quarkus-run.jar
2021-04-01 23:34:28,610 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending lib/deployment/build-system.properties
```

Repeating the same process with local dev mode, all 3 points are executed perfectly.

### Configuration

```properties
quarkus.http.port=8080
quarkus.package.type=mutable-jar
quarkus.live-reload.password=<PASSWORD>
quarkus.live-reload.url=http://10.152.183.197:8080/
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.registry=localhost:32000
quarkus.container-image.insecure=true
quarkus.kubernetes.namespace=mydev
quarkus.kubernetes.resources.requests.memory=256Mi
quarkus.kubernetes.resources.requests.cpu=250m
quarkus.kubernetes.resources.limits.memory=512Mi
quarkus.kubernetes.resources.limits.cpu=1000m
quarkus.kubernetes.deploy=true
quarkus.kubernetes.env.vars.quarkus-launch-devmode=true
```

[Truncated]

## Environment (please complete the following information):

### Output of `uname -a` or `ver`
Linux dfbook 5.11.0-13-generic #14-Ubuntu SMP Fri Mar 19 16:55:27 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

### Output of `java -version`
openjdk version "11.0.10" 2021-01-19
OpenJDK Runtime Environment GraalVM CE 172.16.58.3 (build 11.0.10+8-jvmci-21.0-b06)
OpenJDK 64-Bit Server VM GraalVM CE 172.16.58.3 (build 11.0.10+8-jvmci-21.0-b06, mixed mode, sharing)

### Quarkus version or git rev
1.13.0.Final

### Build tool (ie. output of `mvnw --version` or `gradlew --version`)
Apache Maven 3.6.3
Maven home: /usr/share/maven
Java version: 11.0.10, vendor: GraalVM Community, runtime: /home/adestasio/Apps/graalvm-ce-java11-21.0.0.2
Default locale: it_IT, platform encoding: UTF-8
OS name: "linux", version: "5.11.0-13-generic", arch: "amd64", family: "unix"

Answers:
username_1: I just tried this and it worked like a charm
username_0: Hi Georgios, I don't know what to say; I've just tried and it doesn't work for me. Changing index.html content and then making a request with curl, nothing happens. If I change the string returned by the REST service, then the update and reload of content is triggered.
username_1: Is this still an issue?
weibeld/heroku-buildpack-graphviz
833779023
Title: Can't create image on Heroku
Question: username_0: Hi, First, thank you for your buildpack! What is the best way to generate an image with Graphviz, then display it? How would it look if we use it through the tmp directory? Cheers

Answers:
username_1: You can generate an image, for example, by directly invoking one of the Graphviz commands (see [here](https://graphviz.org/documentation/)) from your application. Then how you display it depends entirely on the type of application you're running, e.g. a web application. In general, after you have created an image, you can display it like any other image.
pyca/cryptography
110878935
Title: cryptography.x509 encodes countryName as a Utf8String but it should be a PrintableString
Question: username_0: [RFC 5280](https://www.ietf.org/rfc/rfc5280.txt) says that countryName must be a PrintableString and, unlike the other elements there (localityName etc.), cannot be a DirectoryString. The consequence is that while `openssl s_server` seems to be OK with certificates generated this way, Go's seemingly correct implementation, which follows the RFC, cannot parse the certificate. @reaperhulk says https://github.com/pyca/cryptography/blob/master/src/cryptography/hazmat/backends/openssl/backend.py#L114 is what should be changed.

Status: Issue closed
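The distinction matters at the DER level: X.690 assigns tag 0x13 to PrintableString and 0x0C to UTF8String, so a strict parser rejects a countryName carrying the wrong tag. A tiny hand-rolled sketch (not the library's actual encoder) showing the two encodings of "US":

```python
def der_string(tag: int, value: str) -> bytes:
    """Minimal DER TLV encoding for short ASCII strings (length < 128)."""
    body = value.encode("ascii")
    return bytes([tag, len(body)]) + body

printable = der_string(0x13, "US")  # PrintableString: what RFC 5280 requires
utf8 = der_string(0x0C, "US")       # UTF8String: what the backend emitted

print(printable.hex())  # 13025553
print(utf8.hex())       # 0c025553
```

The value bytes are identical; only the leading tag byte differs, which is exactly what a lenient parser ignores and a strict one rejects.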
PasaiakoUdala/zerbikat
190925556
Title: Miranda - menu by subsection
Question: username_0: The menu is displayed grouped by **familia/azpisailka** (family/subsection), which is fine. But for internal use it might also be appropriate to have the option of displaying it with the **sail/azpisail** (section/subsection) classification, for example to ask the departments to look over or review their own records, keeping everything in the same group.

Status: Issue closed
JoelBender/bacpypes
415535953
Title: Read object list
Question: username_0: I run whois and I get "remotestation" instead of "Address". I tried to read an object of the remote station device but I cannot read it. Please give me a solution for how I can read the objects of the remote station device. Thank you.

![read_object](https://user-images.githubusercontent.com/45874305/53557796-b4f56a80-3b6c-11e9-80bd-47ebfbc01082.png)

Answers:
username_1: Will return the object name.
username_2: Hello Joel, We are using a Linux terminal to run the script as suggested.

![read](https://user-images.githubusercontent.com/31723078/53788435-d200b380-3f47-11e9-96be-dac4b616fd79.png)

After running the script there is no response back. Are we missing something? Neel.
username_0: Hello, We tried once again and it looks like we got a response, but just once, and then the terminal is stuck waiting for a response. We tried a couple of times after that and we are still not getting a response back. Let us know what we are missing. Firmware Team <EMAIL>
username_1: There is something amiss with your configuration. If you can run Wireshark (with a capture filter on just BACnet traffic, `udp port 47808`) and send it to me, along with your INI configuration file and output from `$ ifconfig` or `$ ip addr show`, then I can probably figure out what's going on.
username_2: Hi Joel, Here are some conclusions that we have drawn. We tried running the ObjectName request for 4 different devices.

![test2](https://user-images.githubusercontent.com/31723078/53881230-220b7300-4039-11e9-998b-21d844226c6f.png)

At the fourth device call the code gets stuck waiting for a response. The line number is #74 in the file https://github.com/username_1/bacpypes/blob/master/samples/ReadProperty.py The same devices, when called in a different order, will respond for the first few times but not after that.
![test1](https://user-images.githubusercontent.com/31723078/53881292-52eba800-4039-11e9-847b-007446834ef8.png)

Here is the requested BACpypes.INI file

![bacini](https://user-images.githubusercontent.com/31723078/53881338-70b90d00-4039-11e9-93bf-e183ba99e94f.png)

and the output of the ifconfig file

![screenshot from 2019-03-06 17-55-03](https://user-images.githubusercontent.com/31723078/53881373-8b8b8180-4039-11e9-8b63-80a179bc021e.png)

Please suggest your thoughts. Thanks
username_1: I suspect this is because there is a mismatch between the content in the request and the response, because they are running in separate threads. The console is calling the application `request_io()` in its thread, but it should actually be called in the main thread (which is the primary thread in the sample applications). Please pull the `stage` branch and run the application again. Thank you for your patience and persistence.
username_2: Hi Joel, Thank you for your suggestion on the threading. We got that working just fine now. That said, we are now faced with another challenge: reading objects of the devices.

<img width="1498" alt="Screenshot 2019-03-13 at 11 24 08 AM" src="https://user-images.githubusercontent.com/31723078/54256976-cafb2600-4583-11e9-9fde-8a6bad9a071e.png">
<img width="1498" alt="Screenshot 2019-03-13 at 11 24 54 AM" src="https://user-images.githubusercontent.com/31723078/54257608-e8c98a80-4585-11e9-9519-c9f45d0d0180.png">

For the MSTP/IP router we are NOT able to get the object list. Same is the case with Device ID 2000. See screenshot below:

![Screenshot from 2019-03-13 11-42-08](https://user-images.githubusercontent.com/31723078/54257650-039bff00-4586-11e9-9de1-0c390c1c5bb3.png)

Please advise. Thanks
username_3: No activity for a long time. If help is still required, please reopen.

Status: Issue closed
tromp/ChessPositionRanking
970978726
Title: 1bn2N1n/Q1B1PB2/qq2b3/1k2K1b1/R4B1r/Qpp1N3/P1q2rp1/Q2R4 w - - 0 1 Question: username_0: 1bn2N1n/Q1B1PB2/qq2b3/1k2K1b1/R4B1r/Qpp1N3/P1q2rp1/Q2R4 w - - 0 1 ![img](http://www.fen-to-image.com/image/36/1bn2N1n/Q1B1PB2/qq2b3/1k2K1b1/R4B1r/Qpp1N3/P1q2rp1/Q2R4) No Checks wx 3 wp 2 wpr 3 wpx 3 maxuwp 4 minopp -2 bx 2 bp 3 bpr 3 bpx 2 maxubp 5 minopp -2 Answers: username_1: `1. b4 a5 2. bxa5 d5 3. c4 dxc4 4. g4 h5 5. gxh5 e5 6. f4 exf4 7. Qc2 Qd3 8. Qd1 Qc2 9. Bb2 Be6 10. Qc1 Bd6 11. Be5 Kd7 12. Qa3 Kc6 13. Kf2 Kb5 14. Kf3 Nc6 15. Nh3 Rf8 16. Ng5 Na7 17. Nh7 Nc8 18. a6 Nh6 19. a7 c3 20. a8=Q c5 21. Q8a7 Bb8 22. d4 c4 23. Nd2 f5 24. Rd1 Nf7 25. d5 g5 26. d6 b6 27. d7 g4+ 28. Kf2 f3 29. Bf4 g3+ 30. Ke3 f2 31. d8=B g2 32. h6 Qb1 33. Rg1 c2 34. Bdc7 c1=B 35. Kd4 c3 36. Nc4 Qc2 37. e4 Bd2 38. Be2 f1=Q 39. Bh5 Qf2+ 40. Ne3 Qe2 41. Bg3 f4 42. e5 f3 43. Bf4 f2 44. Rge1 Rhg8 45. Bg6 Nh8 46. h3 Rf6 47. Bf7 Rg4 48. Nf8 Rh4 49. h7 Kc6 50. Rb1 Qa6 51. Rb4 b5 52. Ra4 b4 53. Rd1 Kb5 54. Ng4 Bd5 55. e6 Bb3 56. e7 Be6 57. Bh2 Bg5 58. Bhf4 Ng6 59. Ne3 b3 60. h8=Q f1=Q 61. Qh7 Nh8 62. Ke5 Qc4 63. Qd3 Qb6 64. Qd2 Qcc6 65. Qdc1 Qa6 66. Qa1 Qcb6 67. Bg3 Rf2 68. Bf4 Rg4 69. h4 Rxh4` Status: Issue closed username_0: Verified. Thanks, Peter!
rifflearning/zenhub
474676785
Title: Chat: Refactoring for style and modularity, Second Half
Question: username_0: This story is about refactoring and modularizing all the video chat code.

Goals for (5/13 - 5/24) sprint:
- Refactor the second half of the code for all of video chat, but still with some (introduced) bugs. But basically, be all the way through cleaned-up code that can be worked on by anybody.

Goals for (5/27 - 6/7) sprint:
- Clean up code for bugs and style and have a working version that is fully refactored. Be ready to start Phase 2, the packaging.

Answers:
username_0: Accepting as complete on @mlippert's recommendation. No user-facing work to test or review.
PistonDevelopers/glium_graphics
90888438
Title: Implement `texture::Rgba8Texture` for `DrawTexture`
Question: username_0:

- [ ] Update `graphics` to 0.2.0
- [ ] Add dependency on piston-texture 0.2.1
- [ ] Implement `Rgba8Texture`

Answers:
username_0: @username_1 recommended to use `TextureAny`.
username_1: Using a `TextureAny` is blocked on https://github.com/username_1/glium/issues/1035
username_0: We could implement `Rgba8Texture` for `DrawTexture` for now, and change to `TextureAny` later when Glium gets updated.

Status: Issue closed
jantimon/html-webpack-plugin
618160701
Title: How does html-webpack-plugin work with vue
Question: username_0: Hi, I am new and using Vue CLI 3 with webpack 4. If I add html-webpack-plugin in my vue.config.js, the plugin replaces my whole HTML body. I hosted the solution on my page: https://hoehensteigergames.com. You can see that there is no body. I used the following configuration:

```js
module.exports = {
  transpileDependencies: ["vuetify"],
  outputDir: "X:\\ClientBuild",
  css: {
    extract: { ignoreOrder: true }
  },
  configureWebpack: {
    entry: join(__dirname, "./src/main.ts"),
    output: { path: "X:\\ClientBuild" },
    plugins: [
      new BundleAnalyzerPlugin(),
      new HtmlWebpackPlugin({
        title: "Höhensteiger Games",
        inject: true
      }),
      new ResourceHintWebpackPlugin(),
      new HtmlCriticalPlugin({
        base: "X:\\ClientBuild",
        src: "index.html",
        dest: "index.html",
        inline: true,
        minify: true,
        extract: true,
        width: 375,
        height: 565,
        penthouse: { blockJSRequests: false }
      })
    ]
  }
};
```

Can anyone help me with how to use/configure the plugin with Vue? I also use Vuetify as the UI framework.

Answers:
username_1: What do you expect the html-webpack-plugin to generate? It looks like you are providing an empty template. Maybe this question would be better for Stack Overflow.

Status: Issue closed
bitshares/beet
916215286
Title: My ISP router DNS forwarder filters 127.0.0.1 by default
Question: username_0:

```
$ nslookup local.get-beet.io 192.168.0.1
Server:     192.168.0.1
Address:    192.168.0.1#53

Non-authoritative answer:
*** Can't find local.get-beet.io: No answer
```

Not sure what the consequence is or how serious the issue is. I think it's a common issue: everyone using the same type of device may encounter it. Asking everyone to change their default DNS settings is not the solution IMHO.

Answers:
username_1: I'm not sure about this, but have you tried adding beet.bitshares.org 127.0.0.1 in the hosts file in the OS?
username_2: Is 192.168.0.1 the IP of your local machine in that case?
username_3: Check if it does it for all localhost addresses or just 127.0.0.1 (i.e. 127.0.0.2 or .3 or .4 etc.)
username_0: 192.168.0.1 is my modem / router / WiFi AP with built-in DHCP server and DNS forwarder (the default settings). Interestingly,

```
$ nslookup localhost 192.168.0.1
Server:     192.168.0.1
Address:    192.168.0.1#53

Name:   localhost
Address: 127.0.0.1

$ nslookup localhost123 192.168.0.1
Server:     192.168.0.1
Address:    192.168.0.1#53

Non-authoritative answer:
*** Can't find localhost123: No answer

$ nslookup local.get-beet.io 192.168.0.1
Server:     192.168.0.1
Address:    192.168.0.1#53

Non-authoritative answer:
*** Can't find local.get-beet.io: No answer

$ nslookup loca.get-beet.io 192.168.0.1
Server:     192.168.0.1
Address:    192.168.0.1#53

** server can't find loca.get-beet.io: NXDOMAIN

$ nslookup get-beet.io 192.168.0.1
Server:     192.168.0.1
Address:    192.168.0.1#53

Non-authoritative answer:
Name:   get-beet.io
Address: 192.168.3.11
Name:   get-beet.io
Address: 172.16.58.3
Name:   get-beet.io
Address: 172.16.58.3
Name:   get-beet.io
Address: 192.168.3.11
```

username_1: I'm not quite sure why you're still getting get-beet.io as a response when we updated it to a new domain... hmmm. Have you tried to use it lately?
username_0:

```
$ nslookup beet.bitshares.org
Server:     127.0.0.53
Address:    127.0.0.53#53

Non-authoritative answer:
*** Can't find beet.bitshares.org: No answer
```

As I've said, the point of this issue is not what domain name you are using in the code, but what IP address is used.
sdi-sweden/geodataportalen
1049425470
Title: (1139, 'LM services are added to the map (though it sits there churning away like it was) but NOT in the layer list!')
Question: username_0: **2013-04-12T06:13:55.000+00:00** ****:

Answers:
username_0: **2013-04-12T06:16:07.000+00:00** ****: Clear map does not work either...
username_0: **2013-04-12T06:16:48.000+00:00** ****: Also applies to LFV
username_0: **2013-04-19T08:38:02.000+00:00** ****: This is caused by a bug in the layer tree that has now been fixed. The fix will be included in the next release on 2 May, after which the services will be shown correctly in the layer tree.
username_0: **2013-04-19T08:43:44.000+00:00** ****: When I tested the above I discovered an error in a layer in LFV's service. Björn, could you let Luftfartsverket know that one of their layers in the AIM service ("LFV Digital AIM WMS"), the layer titled "Area Of Responsibility", is not working as it should? We get a ServiceException when we try to display that layer.
JuliaDiff/ReverseDiff.jl
453743473
Title: Error when computing derivative of function involving a complex matrix
Question: username_0: For example consider the following minimal program:

```julia
function f4(x)
    abs( (Complex[1] * x)[1] )
end

tp = InstructionTape()
at = ReverseDiff.TrackedReal(0.41992711708322633, 0, tp)
ot = f4(at)
ReverseDiff.seed!(ot)
ReverseDiff.reverse_pass!(tp)
println(deriv(at))
```

```
TypeError: in TrackedReal, in V, expected V<:Real, got Type{Complex{Float64}}

Stacktrace:
 [1] ReverseDiff.TrackedArray(::Array{Complex{Float64},1}, ::Array{Int64,1}, ::Array{ReverseDiff.AbstractInstruction,1}) at /Users/wmoses/.julia/packages/ReverseDiff/qmgw8/src/tracked.jl:86
 [2] track(::Array{Complex{Float64},1}, ::Type{Int64}, ::Array{ReverseDiff.AbstractInstruction,1}) at /Users/wmoses/.julia/packages/ReverseDiff/qmgw8/src/tracked.jl:387
 [3] broadcast_mul(::Array{Complex,1}, ::TrackedReal{Float64,Int64,Nothing}, ::Type{Int64}) at /Users/wmoses/.julia/packages/ReverseDiff/qmgw8/src/derivatives/elementwise.jl:421
 [4] broadcast at /Users/wmoses/.julia/packages/ReverseDiff/qmgw8/src/derivatives/elementwise.jl:343 [inlined]
 [5] * at ./arraymath.jl:55 [inlined]
 [6] f4(::TrackedReal{Float64,Int64,Nothing}) at ./In[165]:2
 [7] top-level scope at In[165]:7
```

On the other hand the following succeeds:

```julia
function f4(x)
    abs( (Complex[x])[1] )
end

tp = InstructionTape()
at = ReverseDiff.TrackedReal(0.41992711708322633, 0, tp)
ot = f4(at)
ReverseDiff.seed!(ot)
ReverseDiff.reverse_pass!(tp)
println(deriv(at))
```
rkalis/truffle-plugin-verify
717173143
Title: Found duplicate SPDX-License-Identifiers Question: username_0: Hi, I'm trying to verify my Solidity file, but it throws an error despite using the new `--license` parameter:

```
C:\Projects\Galaxias\file-storage>truffle run verify --licence UNLICENSED Files --network rinkeby
Verifying Files
Found duplicate SPDX-License-Identifiers in the Solidity code, please provide the correct license with --license <license identifier>
Failed to verify 1 contract(s): Files
```

I have removed the SPDX comment from the Files.sol file, but it uses an OZ file (which might use others), and that's where it could get it. My truffle-plugin-verify version is `0.4.0`.
Answers: username_1: The flag should be `--license` instead of `--licence`. I'd also recommend putting all flags to the right side of `Files`, i.e.:

```
truffle run verify Files --network rinkeby --license UNLICENSED
```

Let me know if that works.
username_0: Oops, I'm so ashamed! Worked perfectly, thanks!
Status: Issue closed
marta-file-manager/marta-issues
245575761
Title: Add "wasInapplicable" to ActionHandler Question: username_0: Add `wasInapplicable` to `ActionHandler` in order to allow custom handlers to do something reasonable when the original action is inapplicable. For example, **Preview** is only available for local files, but some plugin can provide a fallback implementation which works also for other VFS. P.S.: The `wasInapplicable` name is random.<issue_closed> Status: Issue closed
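The fallback idea described here follows a chain-of-responsibility shape: handlers are tried in order, and a plugin-provided fallback runs only when the primary handler reports it was inapplicable. A minimal sketch in Python (all names here are hypothetical, not Marta's actual plugin API):

```python
# Hypothetical sketch of the proposed mechanism: if the primary handler
# reports it was inapplicable, try plugin-provided fallbacks in order.
class Handler:
    def handle(self, target):
        """Return True if handled, False if inapplicable."""
        raise NotImplementedError

class LocalPreview(Handler):
    def handle(self, target):
        # Stand-in for the built-in Preview, which only supports local files.
        return target.startswith("local://")

class PluginPreview(Handler):
    def handle(self, target):
        # Stand-in for a plugin fallback that also supports other VFS.
        return True

def run(handlers, target):
    # Return the name of the first handler that was applicable, else None.
    for h in handlers:
        if h.handle(target):
            return h.__class__.__name__
    return None

print(run([LocalPreview(), PluginPreview()], "local://a.txt"))  # LocalPreview
print(run([LocalPreview(), PluginPreview()], "sftp://a.txt"))   # PluginPreview
```

The design choice worth noting: the primary handler stays unchanged and merely signals inapplicability, so fallbacks compose without the core knowing about them.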
SassConf/2015-austin-speaker-cfp
76483551
Title: Throw-Away Code Question: username_0: # Throw-Away Code

## Type of Presentation

[ x ] Standard Length Talk
[ ] Lightning Talk
[ ] Workshop
[ ] Moderated Discussion

## Description (required)

Whether or not your team expects its designers to write production code, there are some huge benefits of having more overlap between your team's design and development processes. Thinking about design in a modular, object-oriented way can also dramatically improve the experience felt by your end-user. I'll take you on a journey with a UX team @ Amazon whose designers had never professionally pushed code, but were experiencing pain from the disconnect between the design and development processes. We'll explore how teaching them to code, building a designing-in-browser dev environment with Middleman, and creating a reusable pattern library with SASS, OOCSS and BEM conventions allowed our team to smooth out an otherwise bumpy product development cycle. Other super fun things we'll discuss working with: "media objects", partials, plug and play animations, & my fav responsive grid system.

## Speaker Info (required)

* Name : <NAME>
* Location : Oakland, CA
* Contact : <EMAIL>

## More Info (optional)

##### Social Media:

* Twitter : [@username_0](https://twitter.com/username_0)
* GitHub : [@username_0](https://github.com/username_0)
* Url(s) : [hire.julieannhorvath.com](http://hire.julieannhorvath.com/)

##### Bio:

I'm <NAME>, *a designer who codes*. With a background in writing, information design is at the heart of everything I build. I'm completely obsessed with CSS, my dog, and painting the perfect lip. I'm the creator of tech's first all-women talk series, Passion Projects. I write things about design and development at [One Neat Trick](https://medium.com/one-neat-trick), a shiny new publication for bite-size tips re: web development and designing-in-browser, and I'm also the curator of [#UXSCHOOL](https://twitter.com/search?q=%23UXSCHOOL&src=typd), a series of tweets about well-designed user-experiences on the Internet.

##### Photo:

![Avatar](https://pbs.twimg.com/profile_images/589622444186468352/pKcp1Bpa.jpg)
sciencehistory/scihist_digicoll
639710786
Title: cap invoke:rake running on wrong server Question: username_0: We use a capistrano plugin to run remote rake tasks. We thought we had configured it to run on the `jobs` server, but it seems to be running on the web server.

https://github.com/sciencehistory/scihist_digicoll/blob/5e8fdd398acf58ef18fe84358e5ea6c32bf11aae/config/deploy.rb#L80

And yet:

```
$ ./bin/cap staging invoke:rake TASK=scihist:solr:reindex
Fetching servers from AWS EC2 tag lookup, from servers with tag:Application='scihist_digicoll' ...
server '172.16.31.10', roles: :cron, :jobs # name: scihist_digicoll-jobs1-staging
server '172.16.17.32', roles: :web, :app # name: scihist_digicoll-web1-staging
server '172.16.17.32', roles: :solr # name: scihist_digicoll-solr1-staging
00:00 invoke:rake
01 bundle exec rake scihist:solr:reindex
01
01 Progress: | [...]
✔ 01 [email protected] 29.692s
```

Note it says it ran on `172.16.17.32`, which was the auto-discovered web/app server, not the cron/jobs server. Hmm.
Answers: username_0: I think it might be a bug in capistrano-rake. I think:

https://github.com/sheharyarn/capistrano-rake/blob/8d9efe6405afaa1b0974f249bd63ca9cd2fdb97b/lib/capistrano/tasks/invoke.rake#L9

`on roles(rake_roles)` should maybe really be `on roles(*rake_roles)`

Have to figure out how to try out a fix.
username_0: Aha, yeah, capistrano-rake is broken in a couple ways with this feature. Will PR there and link here.
username_0: capistrano_rake is literally just this file: https://github.com/sheharyarn/capistrano-rake/blob/8d9efe6405afaa1b0974f249bd63ca9cd2fdb97b/lib/capistrano/tasks/invoke.rake

I will submit a PR, but also we can easily copy the task locally and stop using the gem. It's nice to have the gem even for simple code to not have to reinvent the wheel, but if it's not doing what we need, easy enough to copy and modify. Another change we could make is making sure it only runs the rake task on ONE server, even if there are multiple jobs servers, which is what we want for our usage.
username_0: https://github.com/sheharyarn/capistrano-rake/pull/7 username_1: Running on only one server seems like a good adjustment to make. Status: Issue closed
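The splat fix discussed in this thread hinges on how a varargs function receives a list. Capistrano's `roles` helper is Ruby, but the same argument-passing pitfall can be illustrated with a plain-Python stand-in (the `roles` function below is hypothetical, only mirroring the behavior):

```python
# Hypothetical stand-in for a varargs `roles` helper: each role should
# arrive as its own positional argument.
def roles(*names):
    return names  # tuple of the positional arguments received

rake_roles = ["web", "jobs"]

# Passing the list directly yields ONE argument: the whole list.
print(roles(rake_roles))   # (['web', 'jobs'],)
# Unpacking (Ruby's `roles(*rake_roles)`, Python's `*`) yields one per role.
print(roles(*rake_roles))  # ('web', 'jobs')
```

With the unstarred call, the role filter sees a single list-valued "role" rather than the individual role names, which would explain the task matching the wrong servers.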
chingu-voyage7/Bears-Team-19
384286717
Title: Set up backend Answers: username_1: Hi, I'm setting up the backend, using the DB model described in http://username_1.info/db_docs

To provide a REST API I'm using https://postgrest.org/, which exposes a complete RESTful API based on the database model, so any change added to the schema will be reflected immediately in the API.

You can access the API at http://api.username_1.info. I recommend using Postman to make requests against the API. Making GET requests as an anonymous user is allowed at the moment, but to do any other action (POST, DELETE, PUT, etc.) you will need a JWT token.

Please send me your emails so I can invite you to join a team in Postman, where we can share API requests as examples and test them, in order to organise the client request workflow.
Status: Issue closed
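The access model described above (anonymous reads, JWT-protected writes) can be sketched as follows. This assumes PostgREST's standard `Authorization: Bearer <jwt>` header; the base URL below is a placeholder, not the project's real endpoint:

```python
# Sketch: build request metadata for a PostgREST-style API where GET is
# anonymous and every write requires a JWT. The URL is a placeholder.
def build_request(method, path, token=None):
    headers = {"Accept": "application/json"}
    if method != "GET":
        if token is None:
            raise PermissionError("write operations require a JWT")
        headers["Authorization"] = "Bearer " + token
    return {"method": method, "url": "http://api.example.com" + path, "headers": headers}

print(build_request("GET", "/projects")["headers"])  # no Authorization header
print(build_request("POST", "/projects", token="abc")["headers"]["Authorization"])  # Bearer abc
```

An actual client would pass these headers to an HTTP library; the point of the sketch is only where the token is (and is not) required.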
zalando/patroni
1121185454
Title: Patroni after losing 3 out of 5 nodes: What to do? Question: username_0: Hi all. Question is short: After losing 3 out of 5 members, quorum is lost and current leader cannot update the lock:

```
Feb 1 20:45:05 buster patroni[10346]: 2022-02-01 20:44:55,581 INFO: Lock owner: patroni-2; I am patroni-2
Feb 1 20:45:05 buster patroni[10346]: 2022-02-01 20:45:05,604 ERROR: failed to update leader lock
Feb 1 20:45:05 buster patroni[10346]: 2022-02-01 20:45:05,610 ERROR: Failed to drop replication slot 'patroni_1'
Feb 1 20:45:07 buster patroni[10346]: 2022-02-01 20:45:07,617 INFO: not promoting because failed to update leader lock in DCS
Feb 1 20:45:07 buster patroni[10346]: 2022-02-01 20:45:07,618 WARNING: Loop time exceeded, rescheduling immediately.
```

Is there a way of making this cluster fully functional again, with only two nodes?
Status: Issue closed
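For context on why the leader can no longer update its lock: Patroni stores the lock in a DCS (etcd, Consul, ZooKeeper, ...), and those systems use majority quorum over all configured members. A generic sketch of the arithmetic (not Patroni's actual code) shows that 2 survivors out of 5 members can never form a majority:

```python
def quorum(cluster_size):
    # Majority quorum: strictly more than half of ALL configured members.
    return cluster_size // 2 + 1

print(quorum(5))       # 3: a 5-member cluster needs 3 votes
print(2 >= quorum(5))  # False: two survivors cannot reach quorum
```

So with only two nodes up, writes to the DCS (and therefore leader-lock updates) fail by design until the cluster membership itself is reconfigured.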
Home-Is-Where-You-Hang-Your-Hack/sensor.goveetemp_bt_hci
726438713
Title: HA unable to find sensors Question: username_0: Hello! HA suddenly stopped updating the sensors about 48 hrs ago. Hoping you can help me find out why! Core logs are below. 2020-10-20 17:17:39 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name iPad 2020-10-20 17:17:39 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name SM-G988U 2020-10-20 17:19:30 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name iPad 2020-10-20 17:19:30 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name SM-G988U 2020-10-20 17:19:30 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name iPad 2020-10-20 17:19:30 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name SM-G988U 2020-10-20 17:19:30 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name iPad 2020-10-20 17:19:30 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name SM-G988U 2020-10-20 17:19:30 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name iPad 2020-10-20 17:19:30 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name SM-G988U 2020-10-20 17:24:31 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name iPad 2020-10-20 17:24:31 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name SM-G988U 2020-10-20 17:24:31 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name iPad 2020-10-20 17:24:31 WARNING (MainThread) [homeassistant.components.mobile_app.notify] Found duplicate device name SM-G988U [cont-finish.d] executing container finish scripts... [cont-finish.d] done. [s6-finish] waiting for services. 
Client error on /homeassistant/restart request s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening [s6-finish] sending all processes the TERM signal. [s6-finish] sending all processes the KILL signal and exiting. [s6-init] making user provided files available at /var/run/s6/etc...exited 0. [s6-init] ensuring user provided files have correct perms...exited 0. [fix-attrs.d] applying ownership & permissions fixes... [fix-attrs.d] done. [cont-init.d] executing container initialization scripts... [cont-init.d] udev.sh: executing... [17:25:30] INFO: Update udev information [cont-init.d] udev.sh: exited 0. [cont-init.d] done. [services.d] starting services [services.d] done. 2020-10-20 17:25:51 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant. 2020-10-20 17:25:51 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for smartthinq which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant. 2020-10-20 17:26:01 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for wyzesense which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant. 2020-10-20 17:26:02 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for govee_ble_hci which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant. 
2020-10-20 17:26:04 ERROR (MainThread) [homeassistant.components.sensor] Error while setting up govee_ble_hci platform for sensor
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 193, in _async_setup_platform
await asyncio.shield(task)
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/config/custom_components/govee_ble_hci/sensor.py", line 202, in setup_platform
adapter = get_provider().get_adapter(int(config[CONF_HCI_DEVICE][-1]))
File "/usr/local/lib/python3.8/site-packages/bleson/providers/linux/linux_provider.py", line 10, in get_adapter
adapter.open()
File "/usr/local/lib/python3.8/site-packages/bleson/providers/linux/linux_adapter.py", line 32, in open
self._socket.bind((self.device_id,))
OSError: [Errno 19] No such device
Answers: username_1: It was working then it stopped? How long was it working before that? Has anything changed? The errors suggest that the Bluetooth adapter cannot be connected to. If you have an external Bluetooth adapter or have connected something new the HCI device number may have changed.
username_0: Hi @username_1, it was working for a few months at least when it stopped. To my knowledge nothing has changed hardware or software wise around the time it stopped. Any addons that I had installed recently, I uninstalled to test. No external BT adapters, using the onboard BT. No new USB or other connected devices either. Thanks!
username_1: That appears to be the log from when Home Assistant is first starting but the Bluetooth HCI is not found. Try rebooting the physical device, some process may still be holding on to it.
username_0: I rebooted via the HA interface earlier today (host reboot, not restart) and just pulled the plug for a few minutes, same thing appears.
2020-10-21 11:14:34 ERROR (MainThread) [homeassistant.components.sensor] Error while setting up govee_ble_hci platform for sensor
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 193, in _async_setup_platform
await asyncio.shield(task)
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/config/custom_components/govee_ble_hci/sensor.py", line 202, in setup_platform
adapter = get_provider().get_adapter(int(config[CONF_HCI_DEVICE][-1]))
File "/usr/local/lib/python3.8/site-packages/bleson/providers/linux/linux_provider.py", line 10, in get_adapter
adapter.open()
File "/usr/local/lib/python3.8/site-packages/bleson/providers/linux/linux_adapter.py", line 32, in open
self._socket.bind((self.device_id,))
OSError: [Errno 19] No such device
username_1: Do you have `hci_device` set in your config?
username_0: I do not have it set
username_1: You wouldn't be using a Raspberry PI 3 would you? The latest OS update apparently has a [Bluetooth issue](https://github.com/home-assistant/operating-system/issues/910)
Status: Issue closed
username_0: Thank you! I am running a Pi 3. Same issue there. That is likely the root cause, we can close out this issue then. I appreciate the help! Sorry about the confusion on my side here.
username_1: That is most likely your issue. If nothing else changed there is no reason for the Bluetooth adapter not to be found. No prob, it is just running down the list of things that can possibly go wrong...the OS breaking a fundamental protocol usually is not the first thing that is thought of.
GSA/data.gov
129658648
Title: Error on SiteWide Search Question: username_0:
1. Go to catalog.data.gov and search for a term.
2. On the search results page, the following is displayed at the top of the page: "You are searching in the list of datasets. Show results in entire Data.gov site."
3. Click on the "entire Data.gov site" link above.
4. An error is displayed. Seems to be Akamai related.

![2016-01-28_22-39-23](https://cloud.githubusercontent.com/assets/5984939/12666675/0533afb8-c611-11e5-8690-acef38c40bd7.png)
Answers: username_1: @username_0 the issue should be resolved now. If you still notice the issue, can you post the URL you are checking?
Status: Issue closed
flutter/flutter
422899240
Title: Flutter App fails to build when run through Xcode on iPhone XR Simulator. CodeSign fails. Question: username_0: Steps to Reproduce 1. Run iPhone XR Simulator through Xcode. 2. Access Runner.xcodeproj of app under ~/[ProjectName]/flutter/ios. 3. Run file. Note: Tried setting build system to Legacy off of a suggestion in StackExchange. This did not work. Logs <img width="1280" alt="Screen Shot 2019-03-19 at 2 29 31 PM" src="https://user-images.githubusercontent.com/45278651/54632320-334e7980-4a54-11e9-86d9-1b063ab571f8.png"> <img width="1021" alt="Screen Shot 2019-03-19 at 2 29 59 PM" src="https://user-images.githubusercontent.com/45278651/54632332-3ba6b480-4a54-11e9-9d3d-25b5465976e0.png"> Xcode log text follows: ``` CodeSign /Users/username_0/Documents/GitHub/Heatwav/flutter/build/ios/Debug-iphonesimulator/Runner.app cd /Users/username_0/Documents/GitHub/Heatwav/flutter/ios export CODESIGN_ALLOCATE=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/codesign_allocate export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin" Signing Identity: "-" /usr/bin/codesign --force --sign - --entitlements /Users/username_0/Documents/GitHub/Heatwav/flutter/build/ios/Runner.build/Debug-iphonesimulator/Runner.build/Runner.app.xcent --timestamp=none /Users/username_0/Documents/GitHub/Heatwav/flutter/build/ios/Debug-iphonesimulator/Runner.app /Users/username_0/Documents/GitHub/Heatwav/flutter/build/ios/Debug-iphonesimulator/Runner.app: resource fork, Finder information, or similar detritus not allowed Command /usr/bin/codesign failed with exit code 1 ``` ``` Ians-MacBook-Pro:flutter username_0$ flutter run --verbose [ +32 ms] executing: [/Users/username_0/development/flutter/] git rev-parse --abbrev-ref --symbolic @{u} [ +44 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u} [ ] 
origin/stable [ ] executing: [/Users/username_0/development/flutter/] git rev-parse --abbrev-ref HEAD [ +13 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD [ ] stable [ ] executing: [/Users/username_0/development/flutter/] git ls-remote --get-url origin [ +15 ms] Exit code 0 from: git ls-remote --get-url origin [ ] https://github.com/flutter/flutter.git [ ] executing: [/Users/username_0/development/flutter/] git log -n 1 --pretty=format:%H [ +12 ms] Exit code 0 from: git log -n 1 --pretty=format:%H [ ] 8661d8aecd626f7f57ccbcb735553edc05a2e713 [ ] executing: [/Users/username_0/development/flutter/] git log -n 1 --pretty=format:%ar [ +14 ms] Exit code 0 from: git log -n 1 --pretty=format:%ar [ ] 5 weeks ago [ ] executing: [/Users/username_0/development/flutter/] git describe --match v*.*.* --first-parent --long --tags [ +16 ms] Exit code 0 from: git describe --match v*.*.* --first-parent --long --tags [ ] v1.2.1-0-g8661d8aec [ +271 ms] executing: /Users/username_0/Library/Android/sdk/platform-tools/adb devices -l [ +9 ms] Exit code 0 from: /Users/username_0/Library/Android/sdk/platform-tools/adb devices -l [ ] List of devices attached [ +5 ms] executing: idevice_id -h [ +38 ms] /usr/bin/xcrun simctl list --json devices [ +331 ms] Found plugin audioplayers at [Truncated] [✓] Android Studio (version 3.3) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin version 33.4.1 • Dart plugin version 182.5215 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01) [✓] IntelliJ IDEA Community Edition (version 2018.3.3) • IntelliJ at /Applications/IntelliJ IDEA CE.app • Flutter plugin version 33.4.2 • Dart plugin version 183.5901 [✓] Connected device (1 available) • iPhone XR • 046D9841-5876-464A-8777-EF5349169AD0 • ios • com.apple.CoreSimulator.SimRuntime.iOS-12-1 (simulator) • No issues found! ``` Status: Issue closed Answers: username_1: Okay!
uic-utah/uic-attribute-rules
537272468
Title: MITRemActDate attribute rule error Question: username_0: This AR only works if the Remedial Action Date is before the MITDate. Remedial Action Date needs to be AFTER the MITDate. Brianna discovered this by entering an MITRemActDate before the MITDate and the AR accepted it.

![image](https://user-images.githubusercontent.com/4369080/70757700-cf649300-1cfd-11ea-8bd2-3f672dd8eff9.png)

Answers: username_1: how did she enter it and what values did she use? for example,

```js
const earliestDate = new Date('2019-12-18');
const mitremactdate = new Date('2018-12-02');
const mitdate = new Date('2019-11-10');

if (mitremactdate < earliestDate && mitremactdate > mitdate) {
  console.log('bingo')
} else {
  console.log('wrong')
}
```

gives the correct result which matches the existing ar rule. Now this sounds like what she did. Notice the `2018` year. This prints wrong.

```js
const earliestDate = new Date('2019-12-18');
const mitremactdate = new Date('2018-12-02');
const mitdate = new Date('2019-11-10');

if (mitremactdate < earliestDate && mitremactdate > mitdate) {
  console.log('bingo')
} else {
  console.log('wrong')
}
```

This AR rule is only run on an update so was she able to enter this as an insert?
username_2: Good Morning Steve, Candace retired yesterday, so I will try to answer your question about the above issue. I spoke with Brianna to get some more background information on this issue. When she enters a mitremactdate that is after the mitdate she gets the following:

![image](https://user-images.githubusercontent.com/49735267/71192655-ef3c0f80-2245-11ea-8059-bf1f8782e5fb.png)

Conversely, when she enters a mitremactdate that is before the mitdate she gets the following:

![image](https://user-images.githubusercontent.com/49735267/71192678-fa8f3b00-2245-11ea-92ef-ab7e070040e9.png)

The script error message appears to be correct, but the AR is triggering incorrectly.
I am still trying to fully learn the processes/verbiage involved in this UIC database, so I apologize if my articulation of this issue is somewhat basic. Thank you for patience. Please let me know if you need any additional information. username_1: Thanks, let me take another look at it with your information! Status: Issue closed username_1: thanks @username_2! I updated your staging environment. Will you test and let me know when is a good time to promote that to production? username_2: Great thank you @username_1 I am out of the office on annual leave for the holidays through January 6th, but I will look into testing when I return. Thank you and Happy Holidays! username_2: Hi @username_1 Brianna helped me test the above in development and it looks like the attribute rule is triggering correctly, so we should be good to promote this to production. Is there anything that needs to be done before a change is promoted from development to production? For example, do I have to make sure that no one is actively using the database before the promotion can occur? username_1: I will need an exclusive lock to perform the migration. I do not think that versions need to be reconciled. username_2: Ok, do I need to do anything on my end to ensure you can get an exclusive lock (e.g., make sure no one else is actively using the database, etc)? username_1: Yeah, scheduling a time when no one is connected should be good. username_2: Ok, what is your schedule looking like for the rest of the week? Is there a time that would work for you? username_1: I am pretty free today through thursday to do this. username_2: Would Thursday at 1:30pm work for you (we could also do anytime after 1:30 pm as well if that would be better for you)? Also, about how long should we plan to stay out of the database while this promotion is in process? username_1: I updated the rules. You're all set. I hope you don't mind I didn't wait until thursday 🏁 username_2: Excellent! 
We don't mind at all, thanks @username_1 !
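The date-ordering constraint debated in this thread can be restated in plain Python. The real rule lives in the geodatabase's attribute-rule script; this sketch (with hypothetical function and argument names) only illustrates the intended comparison:

```python
from datetime import date

def mitremactdate_is_valid(mitremactdate, mitdate):
    # Intended rule per the report: the remedial action date must come
    # AFTER the MIT date. Other checks in the real rule are omitted.
    return mitremactdate > mitdate

print(mitremactdate_is_valid(date(2019, 12, 2), date(2019, 11, 10)))  # True: accepted
print(mitremactdate_is_valid(date(2019, 10, 1), date(2019, 11, 10)))  # False: flagged
```

The original bug was effectively this comparison with its direction reversed, which is why a remedial action date *before* the MIT date was being accepted.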
dokku/dokku
597651600
Title: ps:restart adds new LABEL layer to the existing image Question: username_0: ## Description of problem

When using tag-based deploys (using an existing Docker image from Docker Hub), running `dokku ps:restart` always causes a new image to be created. This new image is based on the image used before the application restart, with one additional layer of `LABEL com.dokku.image-stage=release`.

### How reproducible

It seems to be always reproducible on my production server, running Dokku 0.20.3.

### Steps to Reproduce

1. `dokku apps:create folding`
2. `docker image pull linuxserver/foldingathome:7.5.1-ls1`
3. `docker image tag linuxserver/foldingathome:7.5.1-ls1 dokku/folding:7.5.1`
4. `dokku tags:deploy folding 7.5.1`
5. `docker image ls | grep folding` to check the image hashes of the existing images; inspect the images to see the number of layers (I use Portainer for this 😅 But `docker image history <hash>` probably shows the same information)
6. `dokku ps:restart folding`
7. `docker image ls | grep folding` & `docker image history <hash>` to see the hashes changed, and additional layer(s) of `LABEL`

#### Actual Results

Anytime `dokku ps:restart` is run, the hash of `dokku/folding:7.5.1` is changed, and a new (duplicate) layer is added.

#### Expected Results

Don't re-label existing image if the image with the `LABEL` layer already exists. Or even better yet, can Dokku avoid creating new Docker images with the `LABEL` layer altogether (pre-0.20 behaviour), and if needed, just label the created containers, not images?
## Environment Information ### `dokku report APP_NAME` output ``` -----> uname: Linux milanvit.net 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux -----> memory: total used free shared buff/cache available Mem: 64199 13300 4920 81 45978 51024 Swap: 10239 179 10060 -----> docker version: Client: Docker Engine - Community Version: 19.03.8 API version: 1.40 Go version: go1.12.17 Git commit: <PASSWORD> Built: Wed Mar 11 01:25:46 2020 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.8 API version: 1.40 (minimum version 1.12) Go version: go1.12.17 Git commit: <PASSWORD> Built: Wed Mar 11 01:24:19 2020 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.2.13 GitCommit: <PASSWORD> runc: Version: 1.0.0-rc10 [Truncated] "Platform": "linux", "ProcessLabel": "", "ResolvConfPath": "/var/lib/docker/containers/e6de80bc3c5cb9238b2ffd06da128a984f730e6cc6d5018ae1c262df9fc775be/resolv.conf", "RestartCount": 0, "State": { "Dead": false, "Error": "", "ExitCode": 0, "FinishedAt": "0001-01-01T00:00:00Z", "OOMKilled": false, "Paused": false, "Pid": 15379, "Restarting": false, "Running": true, "StartedAt": "2020-04-10T01:15:44.18266916Z", "Status": "running" } } ] ``` Answers: username_1: Closing as there is a pull request open. Status: Issue closed
elixir-lsp/elixir-ls
854295950
Title: @spec Auto Suggestion Broken since somewhen around 0.6.2? Question: username_0: ### Environment * Elixir & Erlang versions (elixir --version): Erlang/OTP 23 [erts-11.2] [source] [64-bit] Elixir 1.11.4 (compiled with Erlang/OTP 21) * Operating system: Windows 10 /WSL Arch * Editor or IDE name (e.g. Emacs/VSCode): VSCode * Editor Plugin/LSP Client name: ElixirLS ### Troubleshooting - [X] Restart your editor (which will restart ElixirLS) sometimes fixes issues - [X] Stop your editor, remove the entire `.elixir_ls` directory, then restart your editor * NOTE: This will cause you to have to re-run the dialyzer build for your project If you're experiencing high CPU usage, it is most likely Dialyzer building the PLTs; after it's done the CPU usage should go back to normal. You could also disable Dialyzer in the settings. ### Logs 3. Check the output log by opening `View > Output` and selecting "ElixirLS" in the dropdown. Please include any output that looks relevant. (If ElixirLS isn't in the dropdown, the server failed to launch.) No Errors 4. Check the developer console by opening `Help > Toggle Developer Tools` and include any errors that look relevant. No Errors Installing 0.7.0 then repeatedly deleting .elixir_ls does not help installing 0.6.2 then repeatedly deleting .elixir-ls does help with 0.6.2 i get suggestions for the @spec , with 0.7.0 i dont 0.6.2: https://cdn.discordapp.com/attachments/269508806759809042/829735077779996712/unknown.png 0.7.0: https://media.discordapp.net/attachments/646832679999897610/829994787565731911/unknown.png Answers: username_1: @username_0 What do you mean by "@spec Auto Suggestion"? Do you mean Code Lens suggesting type specs for functions/macros (which I verified is working) or `@spec` attribute code completion (which is also working) or code completions inside typespecs (which is also working)? We need more info and a project that reproduces this issue. 
username_0: The 0.6.2 pic shows the suggestion I'm talking about: a gray @spec recommendation which you can click, and it automatically gets inserted into the code. I'm not the only one with the issue; it happens with any project. I'm using Windows, but the same issue occurs with Arch Linux on WSL2. For one guy it worked by repeatedly installing 0.7, but for me only after installing a 0.6 version. @username_2 meant "Hmm, that's likely a bug introduced since we generalized that portion of the code. Please file a bug report thumbsup" Other user having the same issue: ![Screenshot_20210414_094413_com discord](https://user-images.githubusercontent.com/1122989/114673045-143f7600-9d06-11eb-891b-defcea0aaa79.jpg) I'm using OTP 23 and Elixir 1.11. The same issue happens with OTP 24 rc2 and Elixir 1.11 on Arch Linux. How can I provide you more info? https://gitlab.com/zen_core/zen_core is one project where that issue happens, but I also see it happening on empty Elixir applications.
username_1: It's called Code Lens and the problem is on Windows only. On macOS it works fine. <img width="718" alt="Screenshot 2021-04-14 at 09 56 33" src="https://user-images.githubusercontent.com/1078186/114676213-603fea00-9d09-11eb-8e8c-2d68df41e6f3.png">
username_0: did post those 2 pics in the issue but it's okay :)
0.6.2: https://cdn.discordapp.com/attachments/269508806759809042/829735077779996712/unknown.png
0.7.0: https://media.discordapp.net/attachments/646832679999897610/829994787565731911/unknown.png
igvteam/igv.js
218991452
Title: Tribble index - can only read full chromosome Question: username_0: Coming from the tail end of https://github.com/igvteam/igv.js/issues/320, creating a new ticket to maintain visibility into this. "The "tribble" index isn't fully implemented and can only read a whole chromosome" Leaving out a 'visibilityWindow' param on a bgzip+tabix bedgraph file causes the entire chromosome to be loaded. Setting a visibility window of NNN allows only that slice to be loaded. Answers: username_1: I think this was fixed long ago, if not re-open. Status: Issue closed
DevExpress/testcafe
462689900
Title: Support code steps in Raw API Question: username_0:
<!-- If you have an idea you think might be useful for others, please share as much detail as possible in the sections below. Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists. This may save your time (and ours). -->

### What is your Test Scenario?
<!-- Describe what you'd like to test. -->

### What are you suggesting?
<!-- Describe the solution you'd like to propose and how it may help in your test scenario. -->

### What alternatives have you considered?
<!-- Describe any alternative solutions or features you've considered if any. -->

### Additional context
<!-- Add any other context or screenshots about the feature request here. -->

Answers: username_1: Testing plan:

1. async/await
2. promises
3. require absolute/relative modules
4. error handling and error stack with correct lines/columns
5. shared `t.ctx` between code steps
6. all available global variables and functions are working (setTimeout, new Array, everything)
7. __dirname, __filename
8. Capability to execute Selector/ClientFunction inside a code step

Status: Issue closed
phase-0/phase-0
333920595
Title: 3.2 Technical Blog Question: username_0: # 3.2 Technical Blog

- [ ] Start your Toggl timer.

**Write blog post**

- [ ] Create a `username.github.io/blogs/t3-design-to-web-blog.html` file.

**As if talking to a non-tech friend, discuss:**

- [ ] What a responsive site is, and why responsiveness is important.
- [ ] What mobile first design is, and why it's important.
- [ ] What frameworks are, and their pros and cons.
- [ ] What a wireframe is and why we use it.
- [ ] Add and position your wireframe images from your `username.github.io/images/` directory. This will be a 'content' image, so add it as an HTML element.
- [ ] The aspects of your wireframes you found difficult to implement, and why.

**Link your blog to the main page**

- [ ] On your `index` (home) page, create a link to your technical blog post.
- [ ] Stage and commit with a meaningful commit message.
- [ ] Push to GitHub to make it live!
- [ ] Paste a link to your live blog in the waffle ticket comments below.

**Share it!**

- [ ] On your campus-specific Slack channel, share the link to your blog.

Answers: username_1: https://username_1.github.io
Status: Issue closed
jcabi/jcabi-xml
144405276
Title: XMLDocumentTest.parsesInMultipleThreads() fails randomly Question: username_0: The test fails because the final assertion is that all the started threads should end before the timeout, which is 10 seconds. Possible solutions:
1) increase the timeout (other tests in the same class and in ``XSLDocumentTest`` have a timeout of 30 seconds).
2) remove the assertion, since it's not so relevant to the test anyway (to ensure the test won't fail at random again)
Answers: username_1: @yegor256 dispatch this issue please, see [par.21](http://at.teamed.io/policy.html#21) username_1: @username_0 thanks for this bug, I topped your account for 15 mins, transaction `AP-6MC66331E4214470F`
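As an aside on option 1: the assertion in question amounts to joining every started thread against one shared deadline, and raising the shared timeout is just a parameter change. A Python sketch of the pattern (illustration only; the real test is JUnit):

```python
import threading
import time

def all_finished(threads, timeout):
    # Join every thread against one shared deadline, so the whole group
    # must finish within `timeout` seconds total (not `timeout` each).
    deadline = time.monotonic() + timeout
    for t in threads:
        t.join(max(0.0, deadline - time.monotonic()))
    return not any(t.is_alive() for t in threads)
```

Bumping the timeout from 10 to 30 seconds corresponds to changing the `timeout` argument; dropping the assertion corresponds to ignoring the return value.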
PaddlePaddle/Paddle
1021621893
Title: Which learning rate decay strategy should I use for a prediction task? Question: username_0: I am working on a regression task, predicting energy data. Using a fixed value for the learning rate does not seem ideal, so I would like to use a learning rate decay strategy. Which one would be appropriate? Thanks for any help!
https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/optimizer/Overview_cn.html#about-lr
Answers: username_1: Hi, the scheduler needs to be passed into the optimizer.
username_0: scheduler = paddle.optimizer.lr.NoamDecay(d_model=0.01, warmup_steps=100, verbose=True)
I tried NoamDecay; what does d_model mean?
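The d_model question has a concrete answer in the schedule's formula. Here is a plain-Python sketch of the Noam schedule (based on the form from the Transformer paper, which I believe Paddle's NoamDecay follows; treat the details as an assumption and check the Paddle docs):

```python
def noam_lr(step, d_model, warmup_steps):
    # Linear warm-up for `warmup_steps`, then decay proportional to
    # step ** -0.5; d_model ** -0.5 is a constant scale on the whole curve.
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```

So `d_model` only scales the curve: with `d_model=0.01` the constant factor is `0.01 ** -0.5 == 10`, which makes the peak learning rate quite large. For plain regression a simpler scheduler from the linked overview page may be easier to tune.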
mumuy/mumuy.github.io
1135062996
Title: The use of a function Question: username_0: Hello, I'm learning the source code of <pacman>, but I have no idea about this part of the code. I guess it is used to control the timing of the canvas animation, but there are many points that look strange to me and I couldn't find relevant information. Maybe it's a classic pattern; could you give me some tips to understand it?
[code position in your github](https://github.com/mumuy/pacman/blob/master/game.js)
```
if (!Date.now)
    Date.now = function() { return new Date().getTime(); };

(function() {
    'use strict';
    var vendors = ['webkit', 'moz'];
    for (var i = 0; i < vendors.length && !window.requestAnimationFrame; ++i) {
        // window.requestAnimationFrame https://developer.mozilla.org/zh-CN/docs/Web/API/Window/requestAnimationFrame
        var vp = vendors[i];
        window.requestAnimationFrame = window[vp+'RequestAnimationFrame'];
        window.cancelAnimationFrame = (window[vp+'CancelAnimationFrame'] ||
                                       window[vp+'CancelRequestAnimationFrame']);
    }
    if (/iP(ad|hone|od).*OS 6/.test(window.navigator.userAgent) // iOS6 is buggy
        || !window.requestAnimationFrame || !window.cancelAnimationFrame) {
        var lastTime = 0;
        window.requestAnimationFrame = function(callback) {
            var now = Date.now();
            var nextTime = Math.max(lastTime + 16, now);
            return setTimeout(function() { callback(lastTime = nextTime); },
                              nextTime - now);
        };
        window.cancelAnimationFrame = clearTimeout;
    }
}());
```
(my email: <EMAIL>)
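For what it's worth, the heart of the fallback branch is the line `var nextTime = Math.max(lastTime + 16, now);`: schedule the next callback roughly 16 ms (about 60 fps) after the previous frame, but never in the past. A small Python sketch of just that scheduling rule (my own illustration, not code from the game):

```python
def next_frame(last_time, now, frame_ms=16):
    """Return (time the next callback should fire, delay to wait)."""
    # At least frame_ms after the previous frame, but never before `now`.
    next_time = max(last_time + frame_ms, now)
    return next_time, next_time - now
```

So when callbacks are fast, frames fire every 16 ms; when a callback overruns its frame budget, the next frame fires immediately (delay 0).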
serilog/serilog
140253497
Title: Serilog and Silverlight Question: username_0: has anyone used Serilog with Silverlight (web apps I mean)? interested since I see there is a way to write from Serilog to Application Insights which doesn't have Silverlight support itself currently Answers: username_1: Hi George, Serilog's AI support is via the official client I think, so probably not an option anywhere AI doesn't itself support. I don't think Serilog 1.5 has any Silverlight support, but you may be able to rebuild from source to cover most features on that platform. Usage is very low these days so it's unlikely we'll target it in the future, sorry. Interested to hear how you go and if you hit any roadblocks let me know, happy to help wherever I can. Regards, Nick Status: Issue closed
rclone/rclone
619441807
Title: rclone purge deletes local $HOME if first arg is empty env var Question: username_0: <!-- Welcome :-) We understand you are having a problem with rclone; we want to help you with that! If you've just got a question or aren't sure if you've found a bug then please use the rclone forum: https://forum.rclone.org/ instead of filing an issue for a quick response. If you think you might have found a bug, please can you try to replicate it with the latest beta? https://beta.rclone.org/ If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-) Thank you The Rclone Developers --> #### What is the problem you are having with rclone? First things first: This is the result of a big user error & pure stupidity on my side. I figured I should still report it since it seems like a footgun. I can't quite wrap around why `rclone` would do what it did in this case. I tried running `rclone purge $VAR` where $VAR should have interpolated to `remote:path` but wasn't defined (due to a typo). Before I realized my error, large parts of my home directory were gone before I could stop the command. I only had a `b2` and `crypt` remote configured, no local remote or similiar. Looking at the docs for purge, it does not seem like this should happen. #### What is your rclone version (output from `rclone version`) v1.51.0 #### Which OS you are using and how many bits (eg Windows 7, 64 bit) Ubuntu 20.04 64-bit #### Which cloud storage system are you using? (eg Google Drive) B2, although it does not matter here #### The command you were trying to run (eg `rclone copy /tmp remote:tmp`) `rclone purge $ACCIDENTALLY_EMPTY_ENV_VAR` #### A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`) Don't have any, don't want to nuke my home dir again Answers: username_1: That doesn't do anything for me when I try that. 
```
felix@gemini:~$ rclone purge $BLAH -vv --dry-run
2020/05/16 07:45:37 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "purge" "-vv" "--dry-run"]
Usage:
rclone purge remote:path [flags]

Flags:
-h, --help help for purge

Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.
Command purge needs 1 arguments minimum: you provided 0 non flag arguments: []
```
It just errors out.
username_0: @username_1 Sorry, the command should read `rclone purge "$BLAH"`. `rclone purge $BLAH` doesn't do anything for me as well.
username_0: These two outputs should illustrate the problem. Sorry for the mismatch in `rclone` versions, I just re-installed the OS :wink:
With an existing env var:
```
felix ~ $ export FOO="foo:bar/" && rclone purge "$FOO" -vv --dry-run
2020/05/16 14:05:39 DEBUG : rclone: Version "v1.50.2" starting with parameters ["rclone" "purge" "foo:bar/" "-vv" "--dry-run"]
2020/05/16 14:05:39 NOTICE: Config file "/home/felix/.config/rclone/rclone.conf" not found - using defaults
2020/05/16 14:05:39 Failed to create file system for "foo:bar/": didn't find section in config file
```
Without an existing env var:
```
2020/05/16 14:06:15 DEBUG : rclone: Version "v1.50.2" starting with parameters ["rclone" "purge" "" "-vv" "--dry-run"]
2020/05/16 14:06:15 NOTICE: Config file "/home/felix/.config/rclone/rclone.conf" not found - using defaults
2020/05/16 14:06:15 NOTICE: Local file system at /home/felix: Not purging as --dry-run set
2020/05/16 14:06:15 DEBUG : 3 go routines active
2020/05/16 14:06:15 DEBUG : rclone: Version "v1.50.2" finishing with parameters ["rclone" "purge" "" "-vv" "--dry-run"]
```
username_1: Yeah, in general the commands you are running run against the local path if nothing is set.
Generally, you'd want to programmatically add a check that the environment variable is set, to be safe, if you are using it in code; that avoids things like this. What is it you want to happen? Confirmation before a delete? You can add your comments to this issue. https://github.com/rclone/rclone/issues/1574 Or are you expecting something else? username_0: @username_1 Thanks for clearing up why this happened. Is this mentioned anywhere in the docs? If so I must have missed it. The `purge` documentation only mentions `remote:path`, which suggests that it would actually check whether I'm working against a configured remote. The documentation also mentions `source:sourcePath` in other cases, in which case this behavior seems a bit misleading? I will add a comment to the issue that you linked, thanks a lot. Feel free to close this issue if you see this as expected behavior. username_1: In general any rclone command can be run against a local path, not sure it says that anywhere per se. What would you want it to say instead to make it more clear? It can be updated. username_0: Reading over the docs again, there are a few things to mention:
1. It is mentioned that a _remote_ can be a local path [here](https://rclone.org/docs/#syntax-of-remote-paths) and in detail [here](https://rclone.org/local/). The [purge](https://rclone.org/commands/rclone_purge/) command takes the argument `remote:path` so I guess this whole thing is mentioned in the docs in some way.
2. However, `rclone` lets you configure explicit _remotes_ with `config`. As a user, I made the assumption that `rclone` would at least warn me when I try to `rm -rf` a directory that is not configured as an explicit remote. That is obviously not the case. Would it make sense to warn when purge is called with what amounts to an empty argument? I would argue that deleting the local $CWD with `rclone purge` is almost always user error and should involve a safeguard of some sort.
username_2: The empty argument probably shouldn't happen. username_1: Can you explain what you mean @username_2 ? I'm not sure if you mean on the OS side, or that rclone should not. username_2: I just mean that rclone cmd "" probably should error out. username_1: Ok, that's where I thought you were going but confirmation is always good. I'll update the title of the issue as that does seem to make sense. That seems to be the case for any command as it is repeatable for purge / ls / copy / etc username_3: I see - that is not good :-( It looks like the path parser is willing to accept "" as a synonym for the current directory. Internally rclone uses "" to mean the root of a transfer which is probably why, but it is no good for interactions with the user! I fixed this here if you want to have a go https://beta.rclone.org/v1.51.0-346-geb6e9b19-beta/ (uploaded in 15-30 mins) @username_0 very sorry for the trouble this caused you. username_0: Thank you very much for taking a look and fixing this quickly @username_3 and all. :+1: Happy to see that my sacrifice was not in vain :laughing: Status: Issue closed
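The thread's advice to check that the environment variable is set can be made concrete. In a shell, `"${VAR:?not set}"` aborts instead of expanding to an empty string; for scripts that drive rclone through a subprocess, here is a Python sketch of the same guard (the variable name and the `:` heuristic are my own, not rclone conventions):

```python
import os

def rclone_purge_argv(env_var="RCLONE_TARGET"):
    # Refuse to build the command if the variable is unset or empty, so a
    # shell-expansion mistake can never become `rclone purge` on $HOME.
    target = os.environ.get(env_var, "").strip()
    if not target or ":" not in target:
        # The `:` test is my own "looks like remote:path" heuristic,
        # not something rclone itself enforced at the time of this thread.
        raise ValueError(f"{env_var} must name a remote:path, got {target!r}")
    return ["rclone", "purge", target]
```

The returned argv list can then be handed to `subprocess.run` once the check passes.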
Malinskiy/adam
1124981645
Title: AbstractMethodError exception thrown when executing adb command Question: username_0: **Describe the bug** Exception is thrown when trying to execute simple adb command: ``` 09:04:47.440 [vert.x-eventloop-thread-1] ERROR io.vertx.core.impl.ContextImpl - Unhandled exception java.lang.AbstractMethodError: Receiver class com.malinskiy.adam.transport.vertx.ChannelReadStream does not define or inherit an implementation of the resolved method 'abstract java.lang.Object trySend-JP2dKIU(java.lang.Object)' of interface kotlinx.coroutines.channels.SendChannel. at kotlinx.coroutines.channels.ChannelsKt__ChannelsKt.sendBlocking(Channels.kt:53) at kotlinx.coroutines.channels.ChannelsKt.sendBlocking(Unknown Source) at com.malinskiy.adam.transport.vertx.ChannelReadStream$subscribe$3.handle(VertxSocket.kt:241) at com.malinskiy.adam.transport.vertx.VariableSizeRecordParser.handleParsing(VariableSizeRecordParser.kt:94) at com.malinskiy.adam.transport.vertx.VariableSizeRecordParser.request(VariableSizeRecordParser.kt:68) at com.malinskiy.adam.transport.vertx.VertxSocket$readAvailable$2.handle(VertxSocket.kt:116) at com.malinskiy.adam.transport.vertx.VertxSocket$readAvailable$2.handle(VertxSocket.kt:49) at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:96) at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:59) at io.vertx.core.impl.EventLoopContext.lambda$runOnContext$0(EventLoopContext.java:40) ``` **To Reproduce** ``` private fun run(): Int = runBlocking { StartAdbInteractor().execute() val adb = AndroidDebugBridgeClientFactory().build() adb.execute(GetAdbServerVersionRequest()) return@runBlocking 0 } ``` Answers: username_1: This looks like classpath issue. Can you verify that you’re using the expected version of dependencies including kotlin coroutines and vertx? username_0: Yup, I think I do - from `./gradlew dependencies`: ``` ... 
+--- com.malinskiy:adam:0.2.5 +--- org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.2 | \--- org.jetbrains.kotlinx:kotlinx-coroutines-core-jvm:1.5.2 | +--- org.jetbrains.kotlin:kotlin-stdlib-jdk8:1.5.30 (*) | \--- org.jetbrains.kotlin:kotlin-stdlib-common:1.5.30 \--- io.vertx:vertx-core:4.1.5 ... ``` username_1: Try updating to the latest Adam btw, 0.4.3 has been released 3 months ago username_0: Oops, my bad, I just grabbed latest artifact from https://mvnrepository.com/artifact/com.malinskiy/adam without looking at dates/versions. Let me try newer versions now. username_1: The coordinates changed due to additional features. I didn’t want to include everything in one jar here. Here is the latest https://search.maven.org/artifact/com.malinskiy.adam/adam/0.4.3/jar Status: Issue closed
elersong/suppr
891468329
Title: User Stories 01 - Creating and listing reservations Question: username_0: In `back-end/src/errors/asyncErrorBoundary.js`
```
function asyncErrorBoundary(delegate, defaultStatus) {
  return (request, response, next) => {
    Promise.resolve()
      .then(() => delegate(request, response, next))
      .catch((error = {}) => {
        const { status = defaultStatus, message = error } = error;
        next({
          status,
          message,
        });
      });
  };
}

module.exports = asyncErrorBoundary;
```
Use in controllers as part of `module.exports`. For example:
```
module.exports = {
  create: asyncErrorBoundary(create)
}
```
Answers: username_0: Updated the column names in the database to a shorthand

| **column_name** | **data_type** |
| --------------- | --- |
| first_name | string |
| last_name | string |
| mobile | string |
| date | date |
| time | time |
| size | integer |

username_0: Nevermind. The tests don't like my column names, so I'll be using the long versions.

| column_name | data_type |
| --- | --- |
| first_name | string |
| last_name | string |
| mobile_number | string |
| reservation_date | date |
| reservation_time | time |
| people | integer |

username_0: API and end-to-end tests passing
username_0: Successfully deployed
Note to self: Don't forget to migrate and seed vercel
`NODE_ENV=production npm run knex migrate:latest`
`NODE_ENV=production npm run knex seed:run`
And be sure to have a script for running knex commands in `package.json`
`"knex": "knex`
Status: Issue closed
kirkbushell/eloquence
163664011
Title: Sluggable on update Question: username_0: Hi. I checked the Sluggable trait, but the sluggable methods are not fired on update. Can you update it to recreate the slug when `slugStrategy` detects a different value? Thanks Answers: username_1: Hi there - I'm not sure what you mean. Can you elaborate? Status: Issue closed
alerta/alerta-contrib
983947295
Title: [Telegram plugin] Can't find mentioned Jinja2 template Question: username_0: **Issue Summary** In the Telegram plugin's `README.md` an example template is mentioned in [Explorer](http://explorer.alerta.io/#/send). Once on the site, there doesn't seem to be any Jinja2 template. Answers: username_1: I couldn't find it either. However, the MS Teams plugin has a little example of the template format. It should work if you create a file and point the Telegram plugin to it. https://github.com/alerta/alerta-contrib/blob/5d79ea4bbda44bde0a578856fa465a89e64dee1d/plugins/msteams/README.md
```
MS_TEAMS_SUMMARY_FMT = '<b>[{{ alert.status|capitalize }}]</b> [{{ alert.severity|upper }}] Event {{ alert.event }} on <b>{{ alert.resource }}</b><br>{{ alert.text }}'
```
username_0: Yes, I ended up using the [`DEFAULT_TMPL` variable](https://github.com/alerta/alerta-contrib/blob/40acf3b7d4658b68131a2fddc1478d3d3483f16b/plugins/telegram/alerta_telegram.py#L13-L22) 😃 Could be worth embedding it or changing the link in the `README.md`. Status: Issue closed username_3: What are the possible filters we can use?
crisneisantos/site-css
932099281
Title: There is no need for this id Question: username_0: https://github.com/username_1/site-css/blob/b1e8df5ae2216f9f178e02877c34a31544aa8be6/index.html#L14
Since you don't have any other nav elements, you can use `<nav>` and, in the CSS, use `nav {}` instead of `#header {}`.<issue_closed> Status: Issue closed
snarfed/bridgy
46633233
Title: re-fetch silo profiles periodically to pick up new homepage links Question: username_0: ...e.g. when users sign up for bridgy, then later add a link to their web site. this bit @dshanske recently: http://indiewebcamp.com/irc/2014-10-23#t1414072684397 Answers: username_0: this would also be useful for detecting when accounts go private/protected after signing up (#628). username_0: also, we could do this for Twitter and Instagram as part of poll, since we get the full Twitter user object in each tweet, and we'll get most of the Instagram user too when we switch to scraping (#603).
xing/hops
358042758
Title: Loading presets implicitly should not be possible Question: username_0: ## This is my intent (choose one) - [ ] I want to report a bug - [x] I want to request a feature or change - [ ] I want to provide or change a feature ## The problem Currently presets are loaded implicitly. While It's convenient it also introduces "magic" into the project and makes it hard to understand (without looking into the documentation) which presets are active (all presets which are installed via package.json) and which are not. This might be easy to reason about when you only work with hops. But usually in bigger projects there will be multiple tools (babel, eslint, etc) where you expect to find all module specific configuration in module specific configs. Having config as an opt-in only increases confusion. ## Proposed solution Enforce loading of presets via config file and not as an opt in in order to decrease confusion. Answers: username_1: @username_0 thank you for your input! I think you raise some very valid concerns here. The trade-off between explicitness and convenience is a very tricky one, though - and I would love to find out what @username_2 has to add to this discussion: it was the two of us who came up with the feature in the first place. username_2: @username_0 we built this feature inspired by [parcel](https://parceljs.org/plugins.html#using-plugins), which also automatically discovers and enables all installed parcel plugins; and we thought that this is a nice feature which reduces configuration, especially in simple projects (in some cases it might be required to manually list the presets, for example when presets depend on one another or require to be loaded in a specific order). But as we talked about this today in the office no one seems to have any particular opinion for keeping or removing this feature. /cc @username_3 @jhiode username_0: @username_2 thanks for the reply. 
I guess the implementation on parcel's side stems from its `no configuration` philosophy and kinda makes sense in its scope. I also understand how it fits into hops' philosophy. What differs is that with hops `Some presets require (or allow) additional configuration.`. That enables four ways to configure hops:
1) Implicitly load presets
2) Explicitly load presets
3) Implicitly load presets and configure them
4) Explicitly load presets and configure them
This can potentially increase entropy and confusion imho. 🙈 Of course what option is being used depends heavily on the team. I personally prefer exactly one way
1) Explicitly load presets and configure them
While I like the idea of no config, the reality is that a lot of projects moved to simpler configs, and it makes it easier to just look into a common location for all config options. E.g., in the current project I am working on.
![screen shot 2018-09-11 at 10 53 17](https://user-images.githubusercontent.com/1830601/45349570-e7f17500-b5b1-11e8-8876-1b26955a7a9c.png)
**Side note**
Also it should be clarified what `hops.presets` actually does. Will it actually disable automatic package discovery or is it just for the sake of documentation? i.e.
```
BEFORE
If you prefer, you can also explicitly list the presets that you want to use under the presets key in your application configuration.
=====
AFTER
If you prefer, you can also explicitly list the presets that you want to use under the presets key in your application configuration. This will disable automatic discovery of hops packages.
```
username_3: @username_0 I really like your idea of making it more explicit in the docs. Would you like to open a PR, so you get the well-deserved contribution? :) Maybe you can replace `hops package` with `hops preset`, though. Status: Issue closed
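For readers landing here, the explicit variant discussed above might look roughly like this (a sketch only: the preset names are placeholders, I am assuming the configuration lives under a `hops` key in `package.json`, and whether listing presets disables automatic discovery is exactly the open question raised in the side note):

```json
{
  "hops": {
    "presets": [
      "hops-preset-a",
      "hops-preset-b"
    ]
  }
}
```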
WRSC/tracking
354338190
Title: Error generating coordinates JSON view Question: username_0: Accessing e.g. http://192.168.3.11/missions/2/latest_coordinates?limit=10 The error in the logs is: ``` ArgumentError (invalid strptime format - `%Y%m%d%H%M%S%z'): app/models/coordinate.rb:24:in `datetime_as_time' app/controllers/coordinates_controller.rb:88:in `block (2 levels) in latest_by_mission' app/controllers/coordinates_controller.rb:86:in `map' app/controllers/coordinates_controller.rb:86:in `block in latest_by_mission' app/controllers/coordinates_controller.rb:85:in `latest_by_mission' ``` Relevant line seems to be here. I'm not sure what's invalid about that line. Answers: username_0: My suspicion is that we're storing the datetimes as floats, so `"#{datetime}UTC"` is expanded to something like `"20180501104904.0UTC"` (note the decimal point). In the coordinate table view, the code to parse the datetimes looks like this: ```ruby Time.strptime(((coordinate.datetime.to_f).round).to_s+"UTC", "%Y%m%d%H%M%S%z") ``` I guess that `.round` will convert the float to an integer. The simple fix would be to copy this for the other datetime parsing code. Longer term we should presumably store datetimes as datetimes. username_1: fixed on master and deployed, right now. Status: Issue closed username_0: :+1: thanks!
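The suspicion about floats is easy to confirm outside the app: formatting a float keeps a trailing `.0` that `%S` cannot consume, while rounding first removes it. A Python sketch mirroring the Ruby logic (illustration only; note that this sketch attaches the UTC zone explicitly instead of parsing an appended `UTC` suffix with `%z` as the Ruby code does):

```python
from datetime import datetime, timezone

raw = 20180501104904.0  # a datetime stored as a float, YYYYMMDDHHMMSS

# Formatting the float keeps a trailing ".0" that "%S" cannot consume:
try:
    datetime.strptime(f"{raw:.1f}", "%Y%m%d%H%M%S")
    broke = False
except ValueError:  # "unconverted data remains: .0"
    broke = True

def parse_datetime(value):
    # round() turns the float into an int, dropping the trailing ".0"
    return datetime.strptime(str(round(value)), "%Y%m%d%H%M%S").replace(
        tzinfo=timezone.utc)

parsed = parse_datetime(raw)
```

The longer-term fix suggested in the thread (storing datetimes as datetimes) would make the whole round-trip unnecessary.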
aws/aws-cli
100091067
Title: doc request: expand put-log-events Question: username_0: Hello, In the documentation for `put-log events`, please mention that the parameter `timestamp` should be milliseconds since epoch (1970-01-01- 00:00:00 UTC). That tiny bit of information (i.e. milliseconds instead of seconds) can prevent lots of frustrations with 'too old' log message rejection. Thanks! Answers: username_1: Thanks for the feedback, we're looking into getting that updated. username_2: @username_0 - Thank you again for reporting this issue. Our documentation has been updated. https://docs.aws.amazon.com/cli/latest/reference/logs/put-log-events.html https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html
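The seconds-versus-milliseconds mixup is easy to see numerically: a seconds-since-epoch value read as milliseconds lands in January 1970, which is exactly why such events are rejected as too old. A quick Python check:

```python
import time

now_s = time.time()          # seconds since epoch: the common mistake
now_ms = int(now_s * 1000)   # milliseconds since epoch: what PutLogEvents expects

# A seconds value misread as milliseconds is a moment in early 1970:
misread = time.gmtime(now_s / 1000.0)
```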
withfig/vscode-extension
1056086874
Title: Register an output channel and log to it rather than going via devtools Question: username_0: It's preferable that extensions log via the output channel API, since messages like this can clutter the console for extension/core devs: <img width="602" alt="Screen Shot 2021-11-17 at 4 42 24 am" src="https://user-images.githubusercontent.com/2193314/142202880-c04e6ab3-2390-43d9-b79c-316baa5b59f2.png">
runelite/runelite.net
517724314
Title: Add Farming Tracker to website Question: username_0: All farming-related data can already be read easily from the config, so that part should be simple. The complicated part is parsing the data in a similar way to how the client does it at the moment, which is very non-trivial and would probably require either rewriting a big part of the farming plugin on the website or compiling it with, for example, GWT and then adding that to the website.<issue_closed> Status: Issue closed
SAP/devops-cm-client
446312929
Title: Question: do you plan to add support for ADT endpoint? Question: username_0: ADT provides a Transport Management API too, and it is enabled almost everywhere. I added some initial implementation at: https://github.com/username_0/sapcli/blob/master/doc/commands.md#change-transport-system-cts
I planned to add more features in the future, but if you push this project forward, I would have more time to work on other things. Answers: username_1: Yes, we have planned to look into it, but no decision has been taken so far. We are more SAP Cloud Platform driven, as this cm_client is part of our [Project "Piper"](https://github.com/SAP/jenkins-library). The transport system is for us just another deploy target at the moment. username_2: Are there any follow-up questions or can we close this thread? Status: Issue closed username_0: No from my side. I just wanted to make you aware of my attempts in the field of controlling CTS via a tool usable from the command line (e.g. in Jenkins).
pwm-project/pwm
281778793
Title: Email or SMS verification sent even if there is no email/mobile number Question: username_0: Scenario:
Update Profile enabled
Email address optional, mobile number required
Both email verification and SMS verification enabled
Issue:
If the user submits only the mobile number, email verification still fires, even if there is no email address defined in the form field. The update profile process remains blocked, since the user is not able to get the code.
![screen](https://user-images.githubusercontent.com/34513818/33945112-5e12ddec-e01e-11e7-91bd-2a1cf8bf8c47.PNG)
Answers: username_0: The issue is present at least since the snapshot pwm-1.8.0-SNAPSHOT-2017-10-10T09_18_48Z-release-bundle and has been verified in the build pwm-1.8.0-SNAPSHOT-2017-12-08T21_27_12Z-release-bundle Status: Issue closed
gopherjs/gopherjs
248056058
Title: Detect when gopherjs binary Go version != GOROOT version, exit with helpful error. Question: username_0: It's possible to do the following set of steps: 1. Use version Go 1.N. 2. `go get -u github.com/gopherjs/gopherjs`. 3. Update Go to version 1.N+1. (Or downgrade to 1.N-1, or use any other version of Go.) 4. Run `gopherjs`. At step 4, you'll run into problems because the `gopherjs` binary was built for one version of Go, but you have a different version installed. We have a compile-time check for version mismatch, but not a runtime one. We should add one, to make the error message more helpful, instead of something unexpected like: ``` ../../../../../../../../usr/local/go/src/runtime/error.go:134:3: undeclared name: throw ../../../../../../../../usr/local/go/src/runtime/error.go:136:12: undeclared name: CallersFrames ../../../../../../../../usr/local/go/src/runtime/error.go:144:3: undeclared name: throw ../../../../../../../../usr/local/go/src/runtime/error.go:148:3: undeclared name: throw ../../../../../../../../usr/local/go/src/runtime/error.go:153:3: undeclared name: throw ../../../../../../../../usr/local/go/src/runtime/error.go:156:3: undeclared name: throw ``` This idea is inspired by #670. Answers: username_1: @username_2 If I'm not mistaken, this has been fixed for ages, no? Can this issue be closed? username_2: Yes, this ended up being a subset of #941 and got implemented in #966 in Feb 2020. Status: Issue closed
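The check requested here boils down to comparing the Go version the gopherjs binary was compiled against with the version found in GOROOT, and failing early with a readable message. A sketch of that comparison in Python (illustration of the idea only; per the comment above, the actual fix landed in #966):

```python
def check_toolchain(compiled_with, installed):
    """compiled_with / installed look like 'go1.8' or 'go1.8.3'."""
    def major_minor(version):
        v = version[2:] if version.startswith("go") else version
        parts = v.split(".")
        # Compare major.minor only; treating patch releases as
        # compatible is a simplifying assumption of this sketch.
        return tuple(int(p) for p in parts[:2])
    if major_minor(compiled_with) != major_minor(installed):
        raise RuntimeError(
            f"gopherjs was built for {compiled_with} but GOROOT holds "
            f"{installed}; rebuild gopherjs with the installed toolchain")
```

Failing with a message like this is far friendlier than the wall of `undeclared name` errors shown in the issue.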
rancher/rancher
118506021
Title: service.upgrade doesn't update service.publicEndpoints correctly Question: username_0: Steps to reproduce:
1) create service with port 43 exposed.
2) Upgrade service to have a new port, say 44.
Bug: service.publicEndpoints has both 43,44 instead of having just 44.
@ibuildthecloud @username_1 Answers: username_1: Tested with rancher-server version - v0.47.0-rc2
When the service is upgraded and the port is updated, service.publicEndpoint has only the updated port. Status: Issue closed
fiscalize-me/grupo-estrategico
226775522
Title: Objective. Write on the wiki's Objective page. Include models from other organizations. Answers: username_1: To act as a community-support body for researching, analyzing, and publishing information about the conduct of public entities and agencies with regard to how resources are applied, the ethical behavior of their employees and officials, the results produced, and the quality of the services provided. [...] To enable the exercise of the right to influence the public policies that affect the community, as guaranteed by Art. 1 of the 1988 Federal Constitution: "all power emanates from the people." (OSB, 2010a)
smallnest/rpcx
199319581
Title: Method to add field Question: username_0: Hello, I would like to know if there is already a function to add a field to the client request, or whether it is possible to have this feature? Because for now the call is exactly the same as in the rpc package =>
```
{
  method: {method},
  params: [ {args} ],
  id: {int}
}
```
Thanks! Answers: username_1: Simply put, no, because we use the same request as the official rpc lib. But you can hack it, as I did to implement the Auth plugin. Method is a string type, and you can marshal/unmarshal any types to set it as the method field. Status: Issue closed
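The hack described in the answer, packing extra data into the string `method` field because the request shape is fixed, can be illustrated like this (a Python sketch of the idea, not rpcx code; the `?token=` convention is invented for the example):

```python
import json
from urllib.parse import urlencode, parse_qsl

def pack_method(service_method, **extra):
    # Smuggle extra fields through the string `method` field, since the
    # request shape {method, params, id} cannot grow new top-level keys.
    return f"{service_method}?{urlencode(extra)}" if extra else service_method

def unpack_method(method):
    name, _, query = method.partition("?")
    return name, dict(parse_qsl(query))

request = json.dumps({"method": pack_method("Arith.Mul", token="secret"),
                      "params": [{"A": 7, "B": 8}], "id": 1})
```

The server side splits the method string back apart before dispatching, which is essentially what the Auth-plugin hack mentioned above does in Go.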
AdviceBot/AdviceBot
375042523
Title: Get data about Good and bad places in, how Question: username_0: API for place candidates https://maps.googleapis.com/maps/api/place/findplacefromtext/json?input=National%20museum%in%20Krakow%20&inputtype=textquery&fields=name,place_id&key= API for reviews text (only 5 reviews) https://maps.googleapis.com/maps/api/place/details/json?placeid=ChIJXagf6ApbFkcRWuithnXAZTk&fields=review&language=en&key= Status: Issue closed Answers: username_0: API for place candidates https://maps.googleapis.com/maps/api/place/findplacefromtext/json?input=National%20museum%in%20Krakow%20&inputtype=textquery&fields=name,place_id&key= API for reviews text (only 5 reviews) https://maps.googleapis.com/maps/api/place/details/json?placeid=ChIJXagf6ApbFkcRWuithnXAZTk&fields=review&language=en&key= Status: Issue closed
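For reference, those request URLs can be built programmatically so every parameter is escaped correctly (note the hand-written first URL above contains a malformed escape, `%in%`). A Python sketch; the API key placeholder is mine:

```python
from urllib.parse import urlencode

PLACES = "https://maps.googleapis.com/maps/api/place"

def find_place_url(query, key="YOUR_API_KEY"):
    # Place-candidate lookup by free-text query.
    params = {"input": query, "inputtype": "textquery",
              "fields": "name,place_id", "key": key}
    return f"{PLACES}/findplacefromtext/json?{urlencode(params)}"

def details_url(place_id, key="YOUR_API_KEY"):
    # Place details restricted to reviews (the API returns at most 5).
    params = {"placeid": place_id, "fields": "review",
              "language": "en", "key": key}
    return f"{PLACES}/details/json?{urlencode(params)}"
```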
coleifer/peewee
330062095
Title: ManyToManyField doesn't create _through table Question: username_0: Hello, I am trying to implement a multiple choice on a class (Jeweler) for giving optional Style and Specialties choices. I did as follows:
```
import peewee

db = peewee.SqliteDatabase('theone_ai.sqlite', check_same_thread=False)

SPECIALTY_CHOICES = (
    (0, 'Sur mesure'),
    (1, 'Reparations'),
    (2, 'Vente grandes marques'),
    (3, 'Mise au gout bijoux existants'),
    (4, 'Collections propres')
)

STYLE_CHOICES = (
    (0, 'Romantique'),
    (1, 'Contemporait'),
    (2, 'Glam rock'),
    (3, 'Art deco'),
    (4, 'Classique')
)

class BaseModel(peewee.Model):
    class Meta:
        database = db

class Styles(BaseModel):
    style = peewee.IntegerField(choices=STYLE_CHOICES)

class Specialties(BaseModel):
    specialty = peewee.IntegerField(choices=SPECIALTY_CHOICES)

class Jeweler(BaseModel):
    email = peewee.TextField()
    name = peewee.TextField()
    surname = peewee.TextField()
    phone = peewee.TextField()
    raison_sociale = peewee.TextField()
    num_immatriculation = peewee.TextField()
    adresse_siege = peewee.TextField()
    nom_commercial = peewee.TextField()
    styles = peewee.ManyToManyField(Styles, backref='jewelers')
    specialties = peewee.ManyToManyField(Specialties, backref='jewelers')
```
I do create the tables but it doesn't create the _through ones, throwing `peewee.OperationalError: no such table: jeweler_styles_through` errors. Is there a way to force peewee to create the auto-through tables? Answers: username_0: Ok! I found the way:
```
db.create_tables([Jeweler, SalePoint, Styles, Specialties,
                  Jeweler.styles.get_through_model(),
                  Jeweler.specialties.get_through_model()])
```
Now I'm trying to figure out how to populate a peewee object with data from a wtform ... username_1: wtform questions not relevant, but glad you got it sorted. Status: Issue closed
getgrav/grav-plugin-admin
285508910
Title: Unable to delete particles / menu disappears Question: username_0: Theme / Layout "Outlines - Position - Menu - About..." menu not visible / accessible. Unable to delete particles (once one or more have been added to a section). Answers: username_1: I think this is a Gantry5 issue, please refer to the Gantry5 repo for this: https://github.com/gantry/gantry5/issues Status: Issue closed
Alfresco/alfresco-js-api
334464113
Title: It is not possible to upload a content with node.js Question: username_0: <!-- PLEASE FILL OUT THE FOLLOWING INFORMATION, THIS WILL HELP US TO RESOLVE YOUR PROBLEM FASTER. REMEMBER FOR SUPPORT REQUESTS YOU CAN ALSO ASK ON OUR GITTER CHAT: Please ask before on our gitter channel https://gitter.im/Alfresco/alfresco-js-api --> **Type of issue:** (check with "[x]") ``` - [ ] New feature request - [x] Bug - [ ] Support request ``` **Current behavior:** None of the examples provide a working solution for uploading with node.js. All APIs are browser-oriented and require the HTML5 FileApi objects that do not exist in node.js. **Expected behavior:** It should be possible to upload content from node.js. Examples should provide a working code that devs can paste and run. **Steps to reproduce the issue:** <!-- Describe the steps to reproduce the issue. --> **Node version (for build issues):** <!-- To check the version: node --version --> **New feature request:** <!-- Describe the feature, motivation and the concrete use case (only in case of new feature request) --> Status: Issue closed Answers: username_1: There is an example in the integration folder
microsoft/PowerToys
699533275
Title: [Fancy Zones] FZ doesn't apply layout
Question: username_0:
## ℹ Computer information

- PowerToys version: 0.21.1
- PowerToy Utility: Fancy Zones
- Running PowerToys as Admin: Yes
- Windows build number: 2004 (19041.508)
- NVidia RTX 2080 driving 2 x 4K DisplayPort displays at full 4K resolution

## 📝 Provide detailed reproduction steps (if any)

1. Installed 0.21.1 after manually uninstalling 0.19.x
2. Set up a custom layout consisting of 4 zones (2 equally sized zones on each display)
3. Hit Apply
4. Try moving a window (Holding SHIFT) - window goes semi-transparent, but no zones appear in the background
5. Hit WIN+` to open layout editor - editor opens and defaults to 'Focus' layout instead of the applied custom layout. The custom layout is available in the editor, but it's just not being applied when I hit the Apply button
6. Try applying a built-in layout (Focus or Columns) and hit Apply. Same results in Step 4

### ✔️ Expected result

Fancy Zones works as expected

### ❌ Actual result

Fancy Zones does not appear to be working at all.

Answers:
username_0: Further Information...

It appears to be affected by the option to span displays. If that option is toggled at any point, Fancy Zones no longer functions.

If you navigate to `C:\Users\<username>\AppData\Local\Microsoft\PowerToys\FancyZones` and delete the `.json` configuration files and set up new zones - the app returns to functional state (without the need to reboot/restart)
ianyh/Amethyst
779149862
Title: Show space number on Amethyst menu bar icon
Question: username_0: Don't get me wrong, the Amethyst menu bar icon is beautiful! But it could also be more useful! Imagine if it could track what space number you are in at any given time, and display the space number (optionally a space label too) next to its logo.

I don't want to use spacebar just for this... or bitbar. I feel like this feature should be baked in to give better visual cues.

![Screen Shot 2021-01-05 at 10 00 08](https://user-images.githubusercontent.com/47398571/103662141-96691300-4f3d-11eb-9a6c-d7e812dc4af4.png)
Answers:
username_1: I am also looking for a similar feature. Currently I am using another app as a workaround, but ideally it could be included in Amethyst, or in the 'Display Current Layout' function of Amethyst.

The app I am using to show the current space is: https://github.com/gechr/WhichSpace
trustwallet/wallet-core
562129408
Title: Refactor TWStoredKey.cpp Question: username_0: File src/interface/TWStoredKey.cpp contains some business logic, while it should be a C wrapper only. Move logic to StoredKey.cpp. Optionally separate out file operations (import/export) to a different class, for cleaner design and better testability. Answers: username_0: Solved via #863 . Status: Issue closed
marcglasberg/async_redux
513728363
Title: How can we use redux persistence with async redux
Answers:
username_1: Your store optionally accepts a list of `stateObservers`, which can be used for persistence:

```dart
var store = Store<AppState>(
  initialState: state,
  stateObservers: [MyPersistor()],
);

class MyPersistor extends StateObserver<AppState> {
  AppState previousState;

  void observe(
    ReduxAction<AppState> action,
    AppState stateIni,
    AppState stateEnd,
    int dispatchCount,
  ) {
    // Compare stateEnd to previousState and persist the difference.
    persistDifference(previousState, stateEnd);
    previousState = stateEnd;
  }
}
```

At the moment you have to persist the state yourself, by creating the `MyPersistor` class above and its method `persistDifference(previousState, stateEnd)`.

Or, if you can wait, I believe I'll be adding out-of-the-box persisting capabilities to AsyncRedux within the next 3 months. I also believe that https://pub.dev/packages/redux_persist and https://pub.dev/packages/redux_persist_flutter can easily be adapted to work with AsyncRedux.
Status: Issue closed
username_1: New persistence feature: https://github.com/username_1/async_redux#persistence
username_1: More persistence features: https://github.com/username_1/async_redux#saving-and-loading
username_2: I have a question about `LocalPersist`. Here is an example of what I ran into recently: I have a user list and I stored it locally through `LocalPersist`, but I want to load the user list when the app launches. How can I do that? A second question: I tried to load the user list when the app initializes its state, but it didn't work. Do you have any best practices for this part? I'd appreciate any help, thanks.
username_1:
```dart
class Business {
  static late Store<AppState> store;
  static late Persistor persistor;

  Future<void> init() async {
    User? firebaseUser = getFirebaseUser();
    persistor = createPersistor();

    AppState state;

    if (firebaseUser == null) {
      state = await resetState();
    } else {
      AppState? readState = await persistor.readState();
      var currentUser = readState?.currentUser;

      if (currentUser == null) {
        readState = await resetState();
      } else {
        bool userSeemsCorrect = (currentUser.uid == firebaseUser.uid);
        if (!userSeemsCorrect) {
          readState = await resetState();
        }
      }

      state = readState!;
    }

    store = Store<AppState>(
      initialState: state,
      wrapError: wrapError(),
      persistor: persistor,
    );

    turnOnFirebaseListeners();
  }

  @protected
  WrapError? wrapError() => MyWrapError();

  @protected
  User? getFirebaseUser() => ...

  @protected
  Persistor createPersistor() => MyPersistor();

  /// Deletes all persistence, and saves an initialState.
  @protected
  Future<AppState> resetState() async {
    await persistor.deleteState();
    AppState state = AppState.initialState();
    await persistor.saveInitialState(state);
    return state;
  }

  void turnOnFirebaseListeners() {
    if (CurrentUser.exists) {
      store.dispatch(TurnOnFirebaseListenersAction());
    }
  }
}
```
username_2: @username_1 Thank you very much, I will try this in my Flutter app.
projectcalico/calico
211461873
Title: Improve MD consistency
Question: username_0: A lot of our docs are authored in subtly different ways. Would be good to go through the entire docs and provide a more common use of MD directives. For example:
- Consistent use of numbered headings when describing a set of instructions
- Remove the clickable links and replace with inline bold links at the start of the line (e.g. see k8s AWS install instructions)
Answers:
username_1: @username_0 this feels like a whole bunch of issues crammed into one. I think we should do a bunch of these, but it seems like it will be easier to split this work out and give the right bits the right priority if they were separate issues.
Status: Issue closed
username_0: I'm closing this as these are stylistic issues that will hopefully be resolved through our docs expert.
username_0:
- [ ] Consistent use of numbered headings when describing a set of instructions
- [ ] Remove the clickable links and replace with inline bold links at the start of the line (e.g. see k8s AWS install instructions)
- [x] calico/node environment table has extra column
- [ ] Index for the calico integrations page has a bulleted list that duplicates the LHS menu - seems like a maintenance nightmare.
- [ ] usage and reference index pages need beefing up with some decent text
- [ ] mesos demo is actually an installation option (vagrant) and numbering is off: 1, 1.2, 3
- [ ] intro page is text heavy
- [ ] Calico over ethernet fabrics page talks about the document being a tech note.
- [ ] docker overview duplicates the side bar menu. should simplify
- [ ] Open stack guide uses Part 0, Part 1 etc... just use normal section numbering
- [ ] big headings in usage: external connectivity
Status: Issue closed
silverstripe/silverstripe-framework
38304540
Title: Split mode with deleted page causes wrong redirect
Question: username_0: If I navigate to a page in the backend that has been deleted and either split or preview mode is enabled, there will be a 403 forbidden followed by a redirect to a negative index (e.g.: admin/pages/edit/show/-3725489).

Steps to reproduce:
- Silverstripe 3.1.5
- Create a page
- Delete this page
- Enable split or preview mode

Answers:
username_1: Duplicate of https://github.com/silverstripe/silverstripe-cms/issues/1014
Status: Issue closed
opendataio/hcfsfuse
1109251307
Title: Problems to Fuse a COS bucket
Question: username_0: As the README.md says, after the `mvn clean package` command, **hcfsfuse-1.0.0-SNAPSHOT-jar-with-dependencies.jar** was generated in the target folder. However, when I try `$ java -jar target/hcfsfuse-1.0.0-SNAPSHOT-jar-with-dependencies.jar -c core-site.xml -c another-site.xml -m /Users/mbl/fusefs -r https://fuse-optimize-xxx.cos.ap-chengdu.myqcloud.com`, in which the URL is the address of the COS bucket, the following errors occur:

```
[main] WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[main] INFO fuse.HCFSFuse (HCFSFuse.java:main) - mounting to /Users/mbl/fusefs
[Thread-6] ERROR fuse.AlluxioFuseUtils (AlluxioFuseUtils.java:getIdInfo) - Failed to get id from with option -u
[Thread-6] ERROR fuse.AlluxioFuseUtils (AlluxioFuseUtils.java:getGidFromGroupName) - Failed to get gid from group name .
```

The process is **NOT** interrupted, but the shell hangs and eventually the connection to the server shuts down. How can I solve this problem? Is something wrong with the URL of my COS bucket, or does the access authorization need to be changed?
dart-lang/sdk
198083260
Title: VM crash if redirecting to a non existing constructor Question: username_0: The following example crashes on the VM: ```dart class B { B() : this.a(); // B.a(); } main() => new B(); ``` The code works fine if you insert the commented line. The crash stack trace is: ``` Dumping native stack trace for thread 71f9 [0x00000000005a22c7] dart::String::initializeHandle(dart::String*, dart::RawObject*) [0x00000000005a22c7] dart::String::initializeHandle(dart::String*, dart::RawObject*) [0x0000000000798246] dart::Function::UserVisibleName() const [0x0000000000811798] dart::Parser::ParseConstructorRedirection(dart::Class const&, dart::LocalVariable*) [0x0000000000824adb] dart::Parser::ParseConstructor(dart::Function const&) [0x00000000008220be] dart::Parser::ParseFunc(dart::Function const&, bool) [0x0000000000822de1] dart::Parser::ParseFunction(dart::ParsedFunction*) [0x000000000062e84a] dart::DartCompilationPipeline::ParseFunction(dart::ParsedFunction*) [0x0000000000634979] Unknown symbol [0x00000000006354d2] dart::Compiler::CompileFunction(dart::Thread*, dart::Function const&) [0x00000000006356e8] dart::DRT_CompileFunction(dart::NativeArguments) [0x00007ff50dd4861b] Unknown symbol [0x00007ff50dd48690] Unknown symbol [0x00007ff50c0f4a14] Unknown symbol [0x00007ff50c0f4854] Unknown symbol [0x00007ff50c0eef3d] Unknown symbol [0x00007ff50c0f3da4] Unknown symbol [0x00007ff50c0d97a0] Unknown symbol [0x00007ff50c0f3b27] Unknown symbol [0x00007ff50dd489d6] Unknown symbol [0x000000000064f3d8] dart::DartEntry::InvokeFunction(dart::Function const&, dart::Array const&, dart::Array const&) [0x0000000000652fcb] dart::DartLibraryCalls::HandleMessage(dart::Object const&, dart::Instance const&) [0x0000000000741eb8] dart::IsolateMessageHandler::HandleMessage(dart::Message*) [0x00000000007645bc] dart::MessageHandler::HandleMessages(dart::MonitorLocker*, bool, bool) [0x0000000000764acf] dart::MessageHandler::TaskCallback() -- End of DumpStackTrace Aborted (core dumped) 
```
Status: Issue closed
dOpensource/dsiprouter
402300352
Title: Centos7 AMI Startup Issues Question: username_0: On centos 7 AMI build kamailio and rtpengine do not start on boot / AMI reboot. Branch: feature-ami Kamailio issue info: [root@ip-172-31-31-0 ~]# kamailio -c loading modules under config path: /usr/lib64/kamailio/modules/ 0(30147) WARNING: <core> [core/ppcfg.c:221]: pp_ifdef_level_check(): different number of preprocessor directives: N(#!IF[N]DEF) - N(#!ENDIF) = 1 0(30147) INFO: <core> [core/sctp_core.c:75]: sctp_core_check_support(): SCTP API not enabled - if you want to use it, load sctp module Listening on udp: 172.31.35.128:5060 advertise 172.16.17.32:5060 Aliases: udp: ip-172-31-35-128.us-east-2.compute.internal:5060 config file ok, exiting... [root@ip-172-31-31-0 ~]# systemctl start kamailio [root@ip-172-31-31-0 ~]# systemctl status kamailio ● kamailio.service - Kamailio (OpenSER) - the Open Source SIP Server Loaded: loaded (/usr/lib/systemd/system/kamailio.service; enabled; vendor preset: disabled) Active: failed (Result: start-limit) since Wed 2019-01-23 15:42:02 UTC; 4s ago Process: 30176 ExecStart=/usr/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f $CFGFILE -m $SHM_MEMORY -M $PKG_MEMORY (code=exited, status=255) Main PID: 30176 (code=exited, status=255) Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: kamailio.service: main process exited, code=exited, status=255/n/a Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: Unit kamailio.service entered failed state. Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: kamailio.service failed. Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: kamailio.service holdoff time over, scheduling restart. Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: Stopped Kamailio (OpenSER) - the Open Source SIP Server. 
Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: start request repeated too quickly for kamailio.service
Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: Failed to start Kamailio (OpenSER) - the Open Source SIP Server.
Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: Unit kamailio.service entered failed state.
Jan 23 15:42:02 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: kamailio.service failed.

Rtpengine issue info:
[root@ip-172-31-31-0 ~]# systemctl status ngcp-rtpengine-daemon.service
● ngcp-rtpengine-daemon.service - RTPEngine proxy for RTP and other media streams
Loaded: loaded (/usr/lib/systemd/system/ngcp-rtpengine-daemon.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Tue 2019-01-22 17:22:53 UTC; 22h ago
Process: 6873 ExecStopPost=/usr/sbin/rtpengine-stop-post (code=exited, status=0/SUCCESS)
Process: 6847 ExecStart=/usr/sbin/rtpengine-start /etc/default/ngcp-rtpengine-daemon.conf (code=exited, status=0/SUCCESS)
Main PID: 6847 (code=exited, status=0/SUCCESS)

Jan 22 17:22:48 ip-172-31-31-0.us-east-2.compute.internal systemd[1]: Started RTPEngine proxy for RTP and other media streams.
Jan 22 17:22:48 ip-172-31-31-0.us-east-2.compute.internal rtpengine[6849]: INFO: Generating new DTLS certificate
Jan 22 17:22:48 ip-172-31-31-0.us-east-2.compute.internal rtpengine-start[6847]: [1548177768.647102] ERR: FAILED TO CREATE KERNEL TABLE 0 (No such file or directory), KERNEL FORWARDING DISABLED
Status: Issue closed
yakyak/hangupsjs
270952500
Title: Error with coffeescript syntax
Question: username_0: Hi,
When I `npm install hangupsjs` I get

```
node_modules/hangupsjs/src/auth.coffee:32:49: error: unexpected ->
class AuthError extends Error then constructor: -> super
                                                ^^
```

If I install coffeescript via brew or npm, I get the same error. I think this has something to do with the ES6 syntax of coffeescript. Does it support it? Which version should I install?

Thank you all
Answers:
username_1: Hey, it is because of the coffeescript version; use the older version of the coffee compiler, v1.12.7.
username_2: Drop coffee script and switch to es7...
username_3: @username_2 and how can I do that on install? @username_1 I switched to versions 1.12.7 and 1.9 but it doesn't work.
username_3: @username_0 did you solve that? I'm facing the same problem here with Node v10 and CentOS.
username_4:
```
node_modules/hangupsjs/src/auth.coffee:34:49: error: unexpected ->
class AuthError extends Error then constructor: -> super
                                                ^^
```
This is not only old and deprecated, this is non-transpiled coffee-script code; I didn't get 1.12.7 to work :(
username_5: Having this issue in Manjaro.
```
node_modules/hangupsjs/src/auth.coffee:34:49: error: unexpected ->
class AuthError extends Error then constructor: -> super
```
cytoscape/cytoscape.js
343124015
Title: Detect graph cycle in Cytoscape.js
Question: username_0: I want to ensure that my directed graph cy does not contain any cycle. Is there an easier way to do that without implementing the full algorithm? If not, what algorithm do you recommend? Thanks in advance.
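On the second question: the usual recommendation is a depth-first search that reports a cycle when it reaches a node already on the current DFS path (a back edge). A minimal sketch in plain Python over an adjacency dict (the algorithm only, not the cytoscape.js API):

```python
def has_cycle(adjacency):
    """Return True if the directed graph (node -> list of successors) contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / fully explored
    color = {node: WHITE for node in adjacency}

    def visit(node):
        color[node] = GRAY
        for succ in adjacency.get(node, ()):
            if color.get(succ, WHITE) == GRAY:  # back edge: cycle found
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in adjacency)

print(has_cycle({"a": ["b"], "b": ["c"], "c": ["a"]}))   # True  (a -> b -> c -> a)
print(has_cycle({"a": ["b", "c"], "b": ["c"], "c": []})) # False
```

Running time is O(V + E). The same back-edge test can be layered on top of cytoscape.js's own traversal utilities if you prefer not to extract the adjacency list yourself.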
ymcui/Chinese-ELECTRA
676947279
Title: error in loading checkpoints for pretraining Question: username_0: error in loading checkpoints for pretraining, adam_m is missing? ``` 2020-08-11 22:40:26.262591: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key discriminator_predictions/dense/bias/adam_m not found in checkpoint ERROR:tensorflow:Error recorded from training_loop: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error: Key discriminator_predictions/dense/bias/adam_m not found in checkpoint [[node save/RestoreV2 (defined at run_pretraining.py:363) ]] Original stack trace for 'save/RestoreV2': File "run_pretraining.py", line 404, in <module> main() File "run_pretraining.py", line 400, in main args.model_name, args.data_dir, **hparams)) File "run_pretraining.py", line 363, in train_or_eval max_steps=config.num_train_steps) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 2871, in train saving_listeners=saving_listeners) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 367, in train loss = self._train_model(input_fn, hooks, saving_listeners) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model return self._train_model_default(input_fn, hooks, saving_listeners) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1192, in _train_model_default saving_listeners) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1480, in _train_with_estimator_spec log_step_count_steps=log_step_count_steps) as mon_sess: File 
"/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 584, in MonitoredTrainingSession stop_grace_period_secs=stop_grace_period_secs) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 1007, in __init__ stop_grace_period_secs=stop_grace_period_secs) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 725, in __init__ self._sess = _RecoverableSession(self._coordinated_creator) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 1200, in __init__ _WrappedSession.__init__(self, self._create_session()) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 1205, in _create_session return self._sess_creator.create_session() File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 871, in create_session self.tf_sess = self._session_creator.create_session() File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 638, in create_session self._scaffold.finalize() File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py", line 237, in finalize self._saver.build() File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 837, in build self._build(self._filename, build_save=True, build_restore=True) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 875, in _build build_restore=build_restore) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 502, in _build_internal restore_sequentially, reshape) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 381, in 
_AddShardedRestoreOps name="restore_shard")) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 328, in _AddRestoreOps restore_sequentially) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 575, in bulk_restore return io_ops.restore_v2(filename_tensor, names, slices, dtypes) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1696, in restore_v2 name=name) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper op_def=op_def) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func return func(*args, **kwargs) [Truncated] build_restore=build_restore) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 502, in _build_internal restore_sequentially, reshape) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 381, in _AddShardedRestoreOps name="restore_shard")) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 328, in _AddRestoreOps restore_sequentially) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 575, in bulk_restore return io_ops.restore_v2(filename_tensor, names, slices, dtypes) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1696, in restore_v2 name=name) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper op_def=op_def) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func return func(*args, **kwargs) File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in 
create_op op_def=op_def)
File "/home/test/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
```
Answers: username_1: We did not put the adam_* params in the released model to save disk space. Please check https://github.com/google-research/electra/issues/45 for more details.
username_1: Closing since no updates.
Status: Issue closed
heroku/heroku-buildpack-ruby
247866623
Title: Warn if the Ruby version isn't explicitly specified in Gemfile Question: username_0: The heroku CLI plugin for clearing the build cache currently special-cases the directories used by the Ruby buildpack for storing Ruby version, and preserves them whilst "clearing" the cache. Ideally the plugin wouldn't do this, and instead the Ruby buildpack would warn users if they do not explicitly specify a Ruby version in their Gemfile, to avoid problems when they use review apps or pipelines. For more info, see heroku/heroku-repo#70. Answers: username_1: We do warn if Ruby version isn't explicitly set. We also provide a fallback, and to make that fallback not randomly break when we upgrade ruby versions we must record that fallback version somewhere permanent. For now that place is the poorly named "cache". The cache is for all purposes a durable store that can be cleared/cleaned. We've re-purposed this folder to be an actual durable store by bypassing the ability to clean or clear it. This is intentional. Status: Issue closed
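For anyone landing here: the way to avoid the fallback behavior discussed above is to pin the Ruby version in the Gemfile so the buildpack never needs its recorded default. A minimal example (the version number is just a placeholder; use your app's actual Ruby version):

```ruby
source 'https://rubygems.org'

# Pin the Ruby version explicitly so the buildpack does not fall back
# to a recorded default (example version, substitute your own).
ruby '2.6.6'
```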
ether/etherpad-lite
104695235
Title: AttributeManager.removeAttributeOnLine not working properly when text style is already applied Question: username_0: To reproduce: * create pad with 2+ lines; * indent all lines; * apply BOLD to all lines; * outdent all lines; Result: first line is ok; other lines have an extra "*" on the beginning of the line. Example: https://beta.etherpad.org/p/lfp_removeAttributeOnLine It seems that the issue is on `AttributeManager.removeAttributeOnLine`: when line has BOLD on it (or any other attribute that is not applied to the whole line), it is not removing the "*" used as marker for the line. __This seems to be breaking all plugins that use `setAttributeOnLine()`, like ep_headings2.__ Possible solution: create a new hook for plugins to provide which lineAttributes they would use on `setAttributeOnLine()`, so when we check if line still has line attributes, we only check for those that can be used like that.<issue_closed> Status: Issue closed
apache/trafficserver
413985860
Title: Multiple PATH_CHALLENGE frames are sent on one packet
Question: username_0: This bug was introduced with 6a057e7d81. Because `QUICPathValidator::will_generate_frame` is called multiple times for a packet, QUICPathValidator generates frames for all challenges at once.
Answers:
username_0: Fixed.
Status: Issue closed
videojs/video.js
122342873
Title: 5.4.5 release proposal
Question: username_0:
* [ ] This adds touch and move listeners to the parent of volumeBar. #2638
* [ ] Width of buttons is 4em. So, left:4em; #2913
* [ ] updated sandbox to use newer/working CDN #2917
* [ ] Children are an array and no longer an object #2908
* [ ] Example fix #2915
Answers:
username_0: https://github.com/videojs/video.js/releases/tag/v5.4.5 and as `@next` on npm.
username_0: 5.4.5 has been superseded by 5.4.6, which is now on the CDN and released as stable.
Status: Issue closed
emberjs/emberjs-build
52415487
Title: add yuidoc docs Answers: username_1: @username_0 is this still relevant? and if so: what exactly needs to be done? /cc @locks username_2: I'm sorry we didn't get back to this previously, but at this point this repo is unused by Ember and unmaintained. Closing... Status: Issue closed
appirio-tech/connect-app
179873249
Title: No confirmation message shown on uploading new requirements
Question: username_0:
### Expected behavior
Should show confirmation message

### Actual behavior
No confirmation message shown on uploading new requirements

### Steps to reproduce the problem
- Open the URL https://connectv2.topcoder-dev.com/projects
- Login as user
- Create new project
- Add requirements
- Upload files
- Click done

### Screenshot/screencast
![3](https://cloud.githubusercontent.com/assets/7139655/18930049/252053e2-85e4-11e6-9d5e-c0c2ce8ae443.jpg)

#### Environment
- OS: Win 7
- Browser (w/version): Chrome 52
- User role (client, copilot or manager): User
- Account used: christina_uw
Status: Issue closed
reduxjs/redux-toolkit
968228230
Title: Code splitting requires `tagTypes` to be defined at the parent level `createApi`
Question: username_0: Hi Team,

**Actual:** Code splitting requires the `tagTypes` to be defined at the parent level itself, i.e. in `createApi`.

**Expected:** While doing code-splitting, there should be a total separation of concerns, with a way to define `tagTypes` from the partial endpoints configuration itself.

**CodeSandbox:** https://codesandbox.io/s/rtk-code-splitting-7dpin?file=/src/app/services/posts.ts

Not sure if we can currently define `tagTypes` while injecting endpoints, as I didn't find it in the docs.

Let me know if any further details are required.

Thanks,
Manish
Answers:
username_1: You can use `enhanceEndpoints` before `injectEndpoints` to add new tag types on the go
username_0: @username_1 Cool, thanks. We are good to close this issue.
Status: Issue closed
PrismarineJS/node-minecraft-protocol
347043602
Title: Parse / deserialize individual packets
Question: username_0: I'm working on a proxy, which means I only listen for `raw` events on the nmp server so I can then send them to my real server. The only problem I'm having is that sometimes I might want to read what's inside of a packet but I only have a buffer available. Some packets are easy to read (chat for example) but others just throw a bunch of random symbols when I try to read them.
Here's an example of the `player_info` buffer that I'm trying to read (in hex) `2e0001e60a5aaebbfa3f5aa173820e0391d4db086c6c75697363616200000000`
Trying to read this as a normal string outputs `.æ Z®»ú?Z¡s‚‘ÔÛusername_0`
Does NMP expose any function that I could call to manually parse a buffer?
Answers: username_1: I'm just guessing but you could try something like this.
```js
client.deserializer.parsePacketBuffer(buffer)
```
https://github.com/ProtoDef-io/node-protodef/blob/HEAD/doc/api.md#parserprotomaintype
https://github.com/PrismarineJS/node-minecraft-protocol/blob/master/src/client.js#L38
https://github.com/PrismarineJS/node-minecraft-protocol/blob/master/src/transforms/serializer.js#L29-L31
username_2: If all you want is to deserialize an individual (raw) packet, you might be better served using protodef directly. Which is what the above snippet does.
username_0: @username_1's reply works perfectly. Thanks for your help!
Status: Issue closed
username_0: I'm not really sure this is working correctly; I'm getting some very strange outputs. This is the parsed buffer from a `set_slot` packet:
```
{ size: 10, name: 'set_slot', state: 'play' }
{ data:
   { name: 'steer_vehicle',
     params:
      { sideways: 1.3633232759416145e-41,
        forward: 2.369355800876173e-38,
        jump: 0 } },
  metadata: { size: 10 },
  buffer: <Buffer 16 00 00 26 01 01 01 00 00 00> }
```
generated by this code
```js
console.log(metadata, client.deserializer.parsePacketBuffer(buffer))
```
Status: Issue closed
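As an aside on why raw buffers print as "random symbols": packet bytes are structured binary, so decoding the whole buffer as text mangles the non-text fields. A small illustration in Python with a made-up toy layout (not the real Minecraft wire format and not the nmp API):

```python
import struct

# Toy layout: 1-byte packet id, 2-byte big-endian length, then UTF-8 name bytes.
packet = bytes([0x2E]) + struct.pack(">H", 8) + b"username"

# Decoding the whole buffer as text garbles the binary header bytes...
garbled = packet.decode("utf-8", errors="replace")

# ...while field-by-field parsing recovers the structure.
packet_id = packet[0]
(name_len,) = struct.unpack_from(">H", packet, 1)
name = packet[3:3 + name_len].decode("utf-8")

print(packet_id, name)  # 46 username
```

This is essentially what `parsePacketBuffer` does for real packets, driven by the protodef schema instead of a hand-written layout.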
tcptomato/ROad-Block
678286904
Title: ads on various Romanian sites
Question: username_0:
```
||replicaonline.ro/banner/$image
sfatulmedicului.ro###reclame
||agrointel.ro/*.gif$image
||crestinortodox.ro/img/pub/$image
wowbiz.ro##.adocean
||opiniatimisoarei.ro/*.gif$image
opiniatimisoarei.ro##.hiddenad
||opiniatimisoarei.ro/*banner$image
opiniatimisoarei.ro##[href="http://euro-instal.ro"]
opiniatimisoarei.ro##[href="https://certificatenergetictimis.ro/"]
opiniatimisoarei.ro##[href="http://toronto-residence.ro/contact/"]
ziuaconstanta.ro###optional_banner
||ziuaconstanta.ro/images/banners/$image
```
Answers:
username_1: Merged. Thank you.
Status: Issue closed
netbox-community/netbox
692406325
Title: dumpdata broken looking for table extras_script
Question: username_0: When trying to use the dumpdata command to get a current dump of the database I get the following error:

```
Traceback (most recent call last):
File "/opt/netbox/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedTable: relation "extras_script" does not exist
LINE 1: ...OR WITH HOLD FOR SELECT "extras_script"."id" FROM "extras_sc...
^
```

For the full traceback see https://pastebin.com/pjk7zFh2

### Environment
* Python version: 3.8.2
* NetBox version: 2.9.2

### Steps to Reproduce
1. Install Netbox (or use docker-compose from the netbox-docker project), I followed the docs and got to the point at which I could log in to the UI to test. It was not necessary to create any objects in the DB.
2. Run:
```
python manage.py dumpdata --format yaml --traceback
```

### Expected Behavior
Data to be output to stdout in yaml format.
NOTE: without the `--format yaml` argument, json is output to stdout however the error still occurs. <!-- What happened instead? --> ### Observed Behavior For the full traceback see https://pastebin.com/pjk7zFh2 Answers: username_1: As a workaround, you can exclude these models when running `dumpdata`: ``` python manage.py dumpdata --format yaml --traceback -o netbox.yaml --exclude extras.Script --exclude extras.Report ``` username_0: @username_1 thanks, confirmed the workaround username_1: Short of hijacking Django's `dumpdata` management command, I don't think we can do much about this. The Report and Script models are designated as "unmanaged" because they don't actually exist in the database but still need to generate content types and permissions. AFAIK changing them to proxy or abstract won't work for this, and Django has [decided not to exclude unmanaged models](https://code.djangoproject.com/ticket/13816) when dumping data. I'm going to close this out as it seems the workaround mentioned above should suffice. Status: Issue closed
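For scripting the workaround, the exclude flags can be generated from a list of unmanaged model labels. The labels below come from the comment above; any other unmanaged models would need to be added by hand:

```python
# Sketch: build the argv for the dumpdata workaround from a list of
# model labels to exclude (labels taken from the thread above).

def build_dumpdata_command(output="netbox.yaml",
                           excludes=("extras.Script", "extras.Report")):
    """Return the argv list for a dumpdata call that skips unmanaged models."""
    cmd = ["python", "manage.py", "dumpdata", "--format", "yaml", "-o", output]
    for label in excludes:
        cmd += ["--exclude", label]
    return cmd

print(" ".join(build_dumpdata_command()))
```

This only assembles the command line; it would still be run in a NetBox checkout with the virtualenv active.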
rectorphp/rector
971331727
Title: Incorrect behavior of ManualJsonStringToJsonEncodeArrayRector
Question: username_0: # Bug Report

| Subject        | Details             |
| :------------- | :------------------ |
| Rector version | last dev-master     |
| Installed as   | composer dependency |

The rule `ManualJsonStringToJsonEncodeArrayRector` always converts a manual JSON string to a `\Nette\Utils\Json::encode` array, even if the class `Nette\Utils\Json` is not found.

## Minimal PHP Code Causing Issue
See https://getrector.org/demo/1ebfe3fd-7e9c-614c-9bb2-274ba6035927

```php
<?php

final class DemoFile
{
    public function run()
    {
        $someJsonAsString = '{"role_name":"admin","numberz":{"id":"10"}}';
    }
}
```

### Responsible rules
* `ManualJsonStringToJsonEncodeArrayRector`

## Expected Behavior
Any of the following options:
- Convert the manual JSON string to a `json_encode` array and let `JsonDecodeEncodeToNetteUtilsJsonDecodeEncodeRector` in [rector-nette](https://github.com/rectorphp/rector-nette) handle the rest.
- Convert the manual JSON string to a `json_encode` array only if the class `Nette\Utils\Json` is not found.

Answers: username_1: This would be ideal in this case IMO :+1: Could you propose a PR to change `JsonDecodeEncodeToNetteUtilsJsonDecodeEncodeRector`?
username_0: I tried to fix this issue at rectorphp/rector-src#699, but how can I test it when the class `Nette\Utils\Json` exists?
Status: Issue closed
codestates/ds-TIL
699016012
Title: [TIL] 이선주_200911
Question: username_0: :warning: This template is a basic example. Please feel free to adapt it to your own style :warning:
:no_entry_sign: **Sharing assignment solutions is strictly prohibited** :no_entry_sign:

- **Keywords**: `write 2-3 keywords related to what you learned (e.g. Python, pandas, visualization, etc.)` --> matplotlib, figure
- **What I learned**: `briefly describe what you learned` --> drawing graphs with matplotlib
- **Difficult parts**: `describe anything that was hard or confusing among what you learned` --> explaining and organizing things in writing takes a very long time
- **Things I want to know more about / study further**: `describe anything you want to know more about or need to study` --> once I wrap this up, I'd also like to go over seaborn, so I can make good use of both when visualizing
- **Thoughts**: `briefly write your impressions and mood about the day` --> blogging really takes a lot of time (and care)

Answers: username_1: Cheering you on until the day you become a master of visualization
sdi-sweden/geodataportalen
1050684649
Title: (1559, 'Ny anv, kn') Question: username_0: **2014-03-27T11:43:21.000+00:00** ****: Förnamn : Eva Efternamn : Malmberg E-postadress : <EMAIL> Telefon : 0413-28488 Mobil : 0709538120 Org.nr/Personnr: 2120001116
guruahn/vue-google-oauth2
617106261
Title: Error signing in on Instagram viewer
Question: username_0: I'm not able to sign in on the Instagram viewer. The FB viewer and other apps work, but Instagram doesn't.
Answers: username_0: Even the front demo https://stupefied-darwin-da9533.netlify.app/ doesn't work.
username_1: @username_0 Thank you for the issue. I'll check it.
Status: Issue closed
username_1: I haven't solved it yet. This is a very old issue, so I will close it.
insomniacslk/dhcp
376598895
Title: DHCPV4: Missing some useful New*FromRequest server functions
Question: username_0: I am writing an example DHCPv4 server for this library, based on the krolaw implementation, and several base functions are missing, such as:

```
NewOfferFromRequest
NewAckFromRequest
```

I made them on top of the basic `NewReplyFromRequest`, but I believe they should be inside the library. Do you have a plan to add them? Would you like a pull request?

P.S. The `NewOffer` and `NewAcknowledge` functions mentioned in comments are also missing, but I don't care about the client side for now.

Answers: username_1: Hey @username_0, thanks for the suggestion! A server implementation is coming shortly; I have a work-in-progress pull request waiting on some other stuff before being merged. It also includes some of the functions you are suggesting, but I will add only the ones that are strictly necessary for the server code. Of course we can add more of them later on. For now just stay tuned, I plan to merge the v4 server around next week.
username_2: Wouldn't `NewOfferFromRequest` and `NewAckFromRequest` be the same as `NewReplyFromRequest` plus setting the message type? I don't see much added value in having those functions.
username_1: I believe that at the time we opened this issue we didn't have `NewReplyFromRequest`. I'm closing this.
Status: Issue closed
username_2: We did have it, it is mentioned in the issue description.
username_1: Oh, good point. Either way we don't need those functions, thanks for noticing!
username_0: In order to create an acknowledge from `NewReplyFromRequest` we have to set 2 fields, 3 options, and one option list according to the `OptionParameterRequestList` parameter. It looks like this:

```go
r, err := dhcpv4.NewReplyFromRequest(p)
r.SetYourIPAddr(ip)
r.SetServerIPAddr(h.ip)
r.AddOption(&dhcpv4.OptMessageType{MessageType: dhcpv4.MessageTypeAck})
// ... 3 more lines of code
```

The packet structure is incomplete after `NewReplyFromRequest(p)`.
Extra settings must be assigned to create a valid DHCP packet, and the order of the settings is important, so the user has to care about byte handling. The idea is to provide a higher-level function in the library which can deliver a complete packet, so the user can focus on server logic rather than byte handling. At the same time there is a client function, `NewRequestFromOffer`. That function includes the option settings and builds a complete packet for the DHCP client reply, so a user of the library doesn't have to care about byte encoding on the client side. I am just curious why the server side of the library is treated differently.
username_2: `NewReplyFromRequest` is intended to be used as a base function when implementing a server handler. How the `Offer` and `Ack` look may change according to the server implementation. A pull request is always welcomed. How would you suggest `NewOfferFromRequest` and `NewAckFromRequest` should look? Will they be very different from each other?
username_1: I agree that it would be nice if server implementations could focus on logic rather than packets, even if I don't see it as a blocker. Details may still differ, and even the server implementation cannot ignore wire-level details. If we have to implement wrappers, this should probably go one layer up, in a sub-module. However, regarding the server implementation, stay tuned, as I have been working on CoreDHCP, a server implementation based on this library.
username_0: @username_2 `NewOfferFromRequest` only differs from `NewAckFromRequest` by the message type. It is possible to have only a single function like `NewPacketFromRequest(type MessageType, ...)`, but in that case it should handle all possible types of reply, including OFFER, ACK, NAK, etc. It is just a matter of API ergonomics: in my personal opinion, having 3-4 simpler methods whose names exactly describe their purpose is better than one universal, complicated function.
I also agree with @username_1 that this may belong in a higher-level wrapper that can be used in a user-defined server handler implementation. Anyway, I suppose you will come back to that decision while working on the upcoming server implementation at the next abstraction level, so there is no need to have those functions in the library right now.
username_1: At this point I wonder if the server and client should be separate packages from the packet parsing, and contain all the low- and high-level wrappers. I am reopening this issue because there's still room for an interesting discussion.
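For reference, the wire-level detail the thread is about is small: the message-type option (DHCP option 53, RFC 2132) that distinguishes an Offer from an Ack is a three-byte TLV. A standard-library-only sketch of just that encoding (this is not the library's API, only the protocol fact):

```go
package main

import "fmt"

// DHCP message type values from RFC 2132, option 53.
const (
	optMessageType   = 53
	MessageTypeOffer = 2
	MessageTypeAck   = 5
)

// encodeMessageTypeOption returns the wire bytes for option 53:
// option code, length, value.
func encodeMessageTypeOption(msgType byte) []byte {
	return []byte{optMessageType, 1, msgType}
}

func main() {
	fmt.Println(encodeMessageTypeOption(MessageTypeAck)) // prints: [53 1 5]
}
```

A `NewOfferFromRequest`/`NewAckFromRequest` pair would differ only in which of these values ends up in the option, which is username_2's point above.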
tskit-dev/tsdate
1159891601
Title: Set node metadata in json format
Question: username_0:

```
Node(id=37, flags=0, time=33273.69140479153, population=-1, individual=-1, metadata=b'{"mn": 33273.69140479153, "vr": 12870629.731027713}')
```

We should probably attempt to set the "mn" and "vr" keys assuming the schema is valid, and simply omit them (perhaps with a warning) if it is not. Perhaps we should also store this as `{"tsdate_time": {"mn": XXX, "vr": YYY}}` so that it's clear what the metadata refers to?

If no schema exists, and the node metadata is entirely empty, we can probably set the nodes table schema to `tskit.MetadataSchema.permissive_json()`.

I wonder if @username_2, the metadata king, has any thoughts on the best thing to do here.

Answers: username_0: I just remembered https://github.com/tskit-dev/tsinfer/issues/416, in which the metadata schema is set for the node table for tree sequences produced by tsinfer. I noted there that if, for efficiency, we have a tsinfer-specific struct schema for node metadata, we should probably check for it in tsdate and amend the schema to add the necessary fields that tsdate wants to set.
username_1: Just to note here that we're supposed to be working with things that **aren't** tsinfer too, so we need to be very careful about assumptions we make about metadata.
username_2: Thanks for flagging this up @username_0 - can I clarify that the aim here is to add tsdate-specific attributes to the possibly pre-existing metadata? If so, this would seem to be a great use case to consider as we add the higher-level metadata API. As I won't get to that till this round of C work in tskit is done, maybe we should write some strawman metadata-updating code here as a first stab at what the metadata API will eventually do, then replace it when that is ready. Basically something like: "If there is an updateable schema, either struct or JSON, then add in the new properties; if not, assign the minimal schema for the tsdate properties."
username_0: For a struct type, I presume we could either (a) check if it matches against an existing type and, if so, create a derived struct type with extra fields, dump the existing data, and save it back as the new type, or (b) take the existing schema and simply add some new fields onto the end. Isn't (b) a bit hairy, though?
username_2: The order of fields in a struct is determined by the schema, either alphabetically (by default) or specified by the `index` key. So if there is a performance reason, it is possible just to append bytes without it being hairy.
username_0: Thanks @username_2. I guess it might be safer to save the metadata out into a Python dict, then put it back in again? Something like this (which I presume will work for both JSON and struct metadata):

```
if "tsdate" not in tables.nodes.metadata_schema.schema["properties"]:
    # add the required tsdate properties to the node metadata schema
    tsdate_schema = tables.nodes.metadata_schema.schema.copy()
    tsdate_schema["properties"]["tsdate"] = {
        "type": "object",
        "default": {"mean": float("NaN"), "variance": float("NaN")},
        "properties": {
            "mean": {
                "description": "The mean time of this node, calculated from the tsdate posterior probabilities. "
                "This may not be the same as the node time, as it is not constrained by parent-child order.",
                "type": "number",
                "binaryFormat": "d",
            },
            "variance": {
                "description": "The variance in times of this node, calculated from the tsdate posterior probabilities",
                "type": "number",
                "binaryFormat": "d",
            },
        },
    }
    meta = [r.metadata for r in tables.nodes]
    # Clear all metadata before changing the schema - is there a function to do this?
    tables.nodes.packset_metadata([b''] * tables.nodes.num_rows)
    tables.nodes.metadata_schema = tskit.MetadataSchema(tsdate_schema)
    # Put the node metadata back
    for i, (r, md) in enumerate(zip(tables.nodes, meta)):
        tables.nodes[i] = r.replace(metadata=md)
```

username_0: NB: if we are going with the idea of using struct metadata, as in https://github.com/tskit-dev/tsinfer/issues/416#issuecomment-1060923190, then we can do as above and add the `tsdate` metadata in a property called `tsdate` without taking up space on each node, which is nice and clean, I think. We can also fully spell out "mean" and "variance".
username_2: The code above looks like the slow-but-correct way to do this, and as we will be replacing this with the eventual high-level API, I think that is fine.
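The schema-amending step discussed above can be prototyped with plain dicts (a stand-in for `tskit.MetadataSchema`; the property layout follows the snippet in the thread, with "mean"/"variance" spelled out):

```python
def add_tsdate_property(schema):
    """Return a copy of a JSON-schema dict with a "tsdate" property added.

    Plain-dict stand-in for the tskit-based snippet above; real code would
    round-trip through tskit.MetadataSchema instead.
    """
    schema = dict(schema)
    props = dict(schema.get("properties", {}))
    if "tsdate" not in props:
        props["tsdate"] = {
            "type": "object",
            "properties": {
                "mean": {"type": "number"},
                "variance": {"type": "number"},
            },
        }
    schema["properties"] = props
    return schema

base = {"codec": "json", "type": "object", "properties": {}}
updated = add_tsdate_property(base)
```

Working on a copy avoids mutating the table's live schema dict in place, which is the "safer" behaviour username_0 is after.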
quentin7b/android-location-tracker
126918710
Title: Callback for "not enabled" errors
Question: username_0: Provide a callback for these problems:

- GPS_PROVIDER is not enabled
- NETWORK_PROVIDER is not enabled
- PASSIVE_PROVIDER is not enabled

Answers: username_1: I'm looking for the best way to do that. For now I have 2 options:

* Three more methods in the `LocationTracker`: `onGpsProviderError()`, `onNetworkProviderError()`, and `onPassiveProviderError()`, which **might be overridden** but are **not abstract**, one for each provider error
* A single method like `onProviderError(ProviderError)` with a specific throwable for each provider or something.

As a user, could you tell me which would be best? Thanks,
username_0: I guess the most common thing to do when there's a *not enabled* error is to render a message for the user, letting him know that you're having problems trying to retrieve his location. So, I would prefer to use a single method `onProviderError(ProviderError)` to handle the 3 errors.
username_1: See #13, if it's ok, then I'll close
Status: Issue closed
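The single-method option could look roughly like this. Names are illustrative, not the library's actual API:

```java
// Sketch of the single-callback design discussed above: one method taking an
// enum that identifies which provider failed, so the app can render a message.
public class ProviderErrorDemo {

    enum ProviderError { GPS_NOT_ENABLED, NETWORK_NOT_ENABLED, PASSIVE_NOT_ENABLED }

    interface LocationTrackerListener {
        void onProviderError(ProviderError error);
    }

    // Maps each error to a user-facing message.
    static String describe(ProviderError error) {
        switch (error) {
            case GPS_NOT_ENABLED: return "GPS provider is not enabled";
            case NETWORK_NOT_ENABLED: return "Network provider is not enabled";
            default: return "Passive provider is not enabled";
        }
    }

    public static void main(String[] args) {
        LocationTrackerListener listener = error -> System.out.println(describe(error));
        listener.onProviderError(ProviderError.GPS_NOT_ENABLED);
    }
}
```

An enum keeps the switch exhaustive at the call site, which is the main advantage over three separate overridable methods.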
One-com/node-oconf
59395065
Title: Add support to resolve multiple configuration files from the oconf binary
Question: username_0: As @username_1 suggested in #14. What would the semantics of that be?

**first.cjson:**
```json
{
  "foo": "bar"
}
```

**second.cjson:**
```json
{
  "bar": "qux"
}
```

```
$ oconf first.cjson second.cjson
{
  "foo": "bar",
  "bar": "qux"
}
```

That scenario is straightforward. But what happens if first.cjson includes third.cjson, which also defines a value for "bar"? I guess the question is: in which order should files included in the command-line args be loaded, compared to the files included in one of those files?

Answers: username_0: Given the following files:

```
first.cjson:
{
  "#include": "third.cjson",
  "foo": "bar"
}

second.cjson:
{
  "bar": "qux"
}

third.cjson:
{
  "bar": "baz"
}
```

What would you expect the following invocation to output?

```
$ oconf first.cjson second.cjson
```

If we resolve each file independently and then extend them backwards, we would get:

```json
{
  "foo": "bar",
  "bar": "qux"
}
```

as first.cjson resolves to the value `"baz"` at the key `bar` and is then extended by second.cjson, which overwrites the value to `"qux"`. That is the easiest solution to implement, understand, and communicate. But is it what you want to happen? I don't really have a use case for this myself, so I'd need your feedback on this @username_1.
username_1: Admittedly, I haven't really had the need either, but here's my take;

```
$ oconf 1.cjson 2.cjson 3.cjson
```

is equivalent to loading a single file with:

```json
{
  "#include": ["1.cjson", "2.cjson", "3.cjson"]
}
```

username_0: That seems in line with what I am suggesting above, although your explanation is a thousand times simpler :-)
username_0: I've done some hacking around to take a stab at this, plus some other features that I have in mind that require somewhat more advanced internals anyway. I've found a potential problem with the course of action that we have agreed on here.
What would happen in the following case:

```
first.cjson: [1,2,3]
second.cjson: [2,3,4]

$ oconf first.cjson second.cjson
```

I do not see the use case for a root-level list in oconf, but we have a test that makes sure that it works. I guess the expected behaviour would be concatenating them, so that you would get `[1,2,3,2,3,4]`...?
username_2: Without having seen this issue, I just landed this commit on a feature branch, which touches upon it: https://github.com/One-com/node-oconf/commit/ff164c8ddea28ca51bfd01a06a98f2fb48d0be60 It may be doing the opposite of what's suggested here, though... Have ported it from another config loader which is mounted on top of oconf in our stack.
username_0: So passing the entire array here, instead of the first item, would fix this: https://github.com/One-com/node-oconf/blob/master/bin/oconf#L106 What was the relevance of that addition to the feature branch?
username_2: @username_0 Looks like it, yes. The only relevance to that feature branch is that I wanted to make sure with a unit test that it does the right thing when used like this, also with regards to the new `#public` behaviour (which it does).
username_2: @username_0 could you review the mentioned commit ^?
Status: Issue closed
username_2: Closed via 40b8df27b2fa43188ca57657119593995112f406
username_0: @username_2 If you make it a major version bump: LGTM
username_0: Just noticed that you did... :-)
username_2: :) yep
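The agreed left-to-right semantics can be sketched as a deep merge where later configs win. Array handling here is an assumption (arrays are replaced, not concatenated), since the thread leaves root-level arrays undecided:

```javascript
// Sketch: later configs win key-by-key; nested objects are merged
// recursively; arrays and scalars are replaced wholesale (assumption).
function mergeConfigs(...configs) {
  const isObject = (v) => v !== null && typeof v === 'object' && !Array.isArray(v);
  return configs.reduce((acc, cfg) => {
    for (const key of Object.keys(cfg)) {
      acc[key] = isObject(acc[key]) && isObject(cfg[key])
        ? mergeConfigs(acc[key], cfg[key])
        : cfg[key];
    }
    return acc;
  }, {});
}

// Mirrors the first.cjson/second.cjson example from the issue
// (first.cjson already resolved, so bar is "baz" before the merge).
const result = mergeConfigs({ foo: 'bar', bar: 'baz' }, { bar: 'qux' });
console.log(result); // { foo: 'bar', bar: 'qux' }
```

Under username_1's rule, running `oconf a.cjson b.cjson` is then just `mergeConfigs(resolve(a), resolve(b))`.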
SashaShirokov/face_detection
600773175
Title: DB storage for known persons' images
Question: username_0: We need to implement a DB in order to store images of known persons. I suppose the schema could be like this: a single `person` table with `image`, `name`, and `encoding` columns.
Answers: username_1: All right, will do that. I'm going to use Flask-SQLAlchemy.
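A raw-SQLite sketch of that schema. The thread plans to use Flask-SQLAlchemy; this uses only the standard library, and the column types (BLOBs for image and encoding) are assumptions:

```python
import sqlite3

# Single `person` table with image, name, encoding columns, as proposed above.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE person (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        image BLOB NOT NULL,      -- raw image bytes (e.g. PNG/JPEG)
        encoding BLOB NOT NULL    -- serialized face-encoding vector
    )
    """
)
conn.execute(
    "INSERT INTO person (name, image, encoding) VALUES (?, ?, ?)",
    ("Alice", b"\x89PNG\r\n", b"\x00" * 128),  # placeholder bytes
)
row = conn.execute("SELECT name FROM person").fetchone()
print(row)  # ('Alice',)
```

In Flask-SQLAlchemy the same shape would be a `Person` model with `LargeBinary` columns for `image` and `encoding`.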
facebook/react
481769358
Title: New React Developer Tools does not clearly indicate empty object or array
Question: username_0: **Do you want to request a *feature* or report a *bug*?**
Bug/unexpected behavior.

**What is the current behavior?**
When an object or array is empty, there's no arrow to expand it and see that it's empty, nor is there an `(empty)` indication. Initially, I was concerned that I couldn't expand any object or array from the new React DevTools because of this.

![Screen Shot 2019-08-16 at 3 11 35 PM](https://user-images.githubusercontent.com/11951801/63195539-7aa75900-c038-11e9-95fe-4754f7d14693.png)

**What is the expected behavior?**
I would expect either to be able to expand the empty object, or to see `(empty)` next to the non-expandable object.

**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
Chrome version: 76.0.3809.100 (Official Build) (64-bit)
React Developer Tools version: 4.0.2 (8/15/2019)

[Reference discussion on Twitter](https://twitter.com/username_0/status/1162441422496325633)

Answers: username_1: The lack of an arrow to expand was meant to _indicate that it's empty_ 😁 This mimics the browser's Elements panel for HTML elements that don't have children. I'll give it another pass and see if some explicit "empty" label or styling seems better.
Status: Issue closed
username_1: Fixed in https://github.com/username_1/react-devtools-experimental/commit/69b2ecc531f24f6e0e0dbbb26844b060d581926b

![Screen Shot 2019-08-16 at 1 43 04 PM](https://user-images.githubusercontent.com/29597/63197320-4ded4480-c02c-11e9-9ad7-8d0f58e6831f.png)

Will release in v4.0.3 sometime this afternoon.
username_2: Kind of a meta comment — but we should probably align object display with how DevTools displays JavaScript objects in the console, rather than with Elements. Even slight mismatches with object display can be jarring because the muscle memory is so strong.
username_1: v4.0.3 has just been released with this fix.
Full changelog: https://github.com/facebook/react/blob/master/packages/react-devtools/CHANGELOG.md#403-august-17-2019
MicrosoftDocs/cpp-docs
648596829
Title: About value of LONG_MAX and LONG_MIN Question: username_0: I feel that the information in the document seems to be incorrect, please check it again. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 262486b8-5d80-2c0b-f2eb-40c86aaf6a61 * Version Independent ID: 26b45954-2bcc-263f-7522-0c72941141bb * Content: [Integer Limits](https://docs.microsoft.com/en-us/cpp/cpp/integer-limits?view=vs-2019) * Content Source: [docs/cpp/integer-limits.md](https://github.com/Microsoft/cpp-docs/blob/master/docs/cpp/integer-limits.md) * Product: **visual-cpp** * Technology: **cpp-language** * GitHub Login: @corob-msft * Microsoft Alias: **corob** Status: Issue closed Answers: username_0: .
python-rapidjson/python-rapidjson
420462719
Title: Memory leaking in load
Question: username_0: I see constantly growing memory with `rapidjson.load` compared to the standard `json.load`. I unpacked about 10k gzipped JSON files like this:

```python
with open(os.path.join(path, file_name), 'rb') as f:
    content = f.read()
with BytesIO(content) as file_object:
    with gzip.GzipFile(fileobj=file_object, filename='objects.json') as archive:
        return rapidjson.load(archive)
```

Memory consumption of rapidjson:

![image](https://user-images.githubusercontent.com/14128194/54277368-b5e0c000-45a0-11e9-8f7f-fe540e3137d9.png)

With the json lib, memory is almost constant.

Answers: username_1: Do you mean repeatedly executing that code?
username_0: @username_1 yep, something like `for file_name in os.listdir(path): ...`. Added to the issue.
username_1: Thank you, I will try to discover the problem. Any chance you can share the gzipped file?
username_0: I can't share exactly my files, because they contain sensitive data. But I think you can use any arbitrary JSON in gz, for example https://github.com/DanielRosenwasser/angular2-data-table/blob/master/assets/data/100k.json My set of files is about 300 MB of gzips. Each file contains a single JSON of 100-500 KB (unpacked), ~11k files in total.
username_1: No problem, will use that.
username_1: I have the impression that the leak happens in the `load()` variant, and not in `loads()`, at least that's what my simple tests against that 100k.json show. Could you do a simple test to confirm that, by doing `rapidjson.loads(archive.read())` above?
username_1: I added a simple failing test, but I still need to understand what's going on with the `PyReadStreamWrapper` class that seems to be the culprit... I added raw `cout` debugging logs that tell me its destructor gets properly called... I'll dig further...
username_1: As I explained on the `rapidjson` issue, I think that the culprit is *not* the stream wrapper, but rather a deeper problem in how the reading handler uses (or rather, *ignores*) the `copy` argument passed to some of its methods. I will need some help here, as I could not understand the logic. Maybe @kenrobbins or other C++ experts could chime in and shed some light? BTW, the *failing test* I introduced in c5cec20 is *not* enough to replicate the problem :confused: ...
username_1: @username_0, could you test the following patch and tell me if it makes any difference?

```diff
diff --git a/rapidjson.cpp b/rapidjson.cpp
index 4daed1b..6eacbd5 100644
--- a/rapidjson.cpp
+++ b/rapidjson.cpp
@@ -590,6 +590,12 @@ struct PyHandler {
     }

     ~PyHandler() {
+        while (!stack.empty()) {
+            const HandlerContext& ctx = stack.back();
+            if (ctx.copiedKey)
+                free((void*) ctx.key);
+            stack.pop_back();
+        }
         Py_CLEAR(decoderStartObject);
         Py_CLEAR(decoderEndObject);
         Py_CLEAR(decoderEndArray);
```

username_1: Just a gentle ping... I'm having problems replicating the exact failure you noticed, sorry.
username_0: @username_1 I've tried your test; it can't replicate the problem.
I've tried to write a test myself, but it also can't replicate it, even like this:

```python
with open('test.gz', 'wb') as f:
    with gzip.GzipFile(fileobj=f, filename='test.json', mode='wb') as archive:
        archive.write(('[' + ','.join('{"foo": "bar"}' for _ in range(10000)) + ']').encode())

for file_name in range(10000):
    with open('test.gz', 'rb') as f:
        content = f.read()
    with io.BytesIO(content) as file_object:
        with gzip.GzipFile(fileobj=file_object, filename='objects.json') as archive:
            rapidjson.load(archive)
```

I think the key here is the number of files, because a test like this still replicates the problem:

```python
path = "C:\\lots of json gziped files\\"
for file_name in os.listdir(path):
    with open(os.path.join(path, file_name), 'rb') as f:
        content = f.read()
    with io.BytesIO(content) as file_object:
        with gzip.GzipFile(fileobj=file_object, filename='objects.json') as archive:
            rapidjson.load(archive)
```

Could you send a compiled rapidjson with the patch?
username_1: At that point, I would try installing the patch mentioned above with

```
(rj) $ patch <<EOF
diff --git a/rapidjson.cpp b/rapidjson.cpp
index 4daed1b..6eacbd5 100644
--- a/rapidjson.cpp
+++ b/rapidjson.cpp
@@ -590,6 +590,12 @@ struct PyHandler {
     }

     ~PyHandler() {
+        while (!stack.empty()) {
+            const HandlerContext& ctx = stack.back();
+            if (ctx.copiedKey)
+                free((void*) ctx.key);
+            stack.pop_back();
+        }
         Py_CLEAR(decoderStartObject);
         Py_CLEAR(decoderEndObject);
         Py_CLEAR(decoderEndArray);
EOF
patching file rapidjson.cpp
```

and then you can recompile it with

```
(rj) $ python setup.py build_ext --inplace
running build_ext
building 'rapidjson' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall...
```

and try running your tests using this virtualenv. Let me know if this works; otherwise I will send you a compiled version for GNU/Linux 64bit.
username_1: I released v0.7.1, which includes the mentioned [patch](0f99e62b5e63d04f736cefcdfd4c91ceca112d64).
Since I cannot be sure that fixes the issue, I won't close this ticket for a while; hopefully you can give some feedback.
username_0: v0.7.1 is still leaking. I've tried your JSON; it's too small to reproduce the problem. So, here is a self-contained test which clearly reproduces the problem:

```python
import rapidjson
import gzip
import io
import requests

def test_leak3():
    r = requests.get('https://raw.githubusercontent.com/DanielRosenwasser/angular2-data-table/master/assets/data/100k.json')
    with open('test.gz', 'wb') as f:
        with gzip.GzipFile(fileobj=f, filename='test.json', mode='wb') as archive:
            archive.write(r.text.encode())

    for i in range(10000):
        with open('test.gz', 'rb') as f:
            content = f.read()
        with io.BytesIO(content) as file_object:
            with gzip.GzipFile(fileobj=file_object, filename='test.json') as archive:
                rapidjson.load(archive)
```

username_1: I hope I found the glitch (see [this commit](https://github.com/python-rapidjson/python-rapidjson/commit/276a8f08fd0f3fe89c6ca550f81edf8b88a6b10f)), and I released v0.7.2. With it, executing your script above I see stable memory usage, at last! Can you please confirm that's the case?
Status: Issue closed
username_0: Confirmed, 0.7.2 is fixed. Thank you!

![image](https://user-images.githubusercontent.com/14128194/59193462-74a38080-8b8e-11e9-9a51-bdf92936939d.png)

username_1: Great, thank you, last but not least for your patience... it took a while to figure out!
username_0: It would be good to add the test from https://github.com/python-rapidjson/python-rapidjson/issues/117#issuecomment-492293341 to the suite, maybe.
username_1: Right, the commit above lets the tracemalloc machinery catch the error.
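A regression check for this class of bug can be written with `tracemalloc`, the mechanism username_1 mentions. In this sketch the stdlib `json` module stands in for `rapidjson` so it runs anywhere; with python-rapidjson installed, the import would be swapped:

```python
import gzip
import io
import json
import tracemalloc

# Load the same gzipped JSON repeatedly and compare memory snapshots;
# a leaking loader shows steadily growing size_diff between snapshots.
payload = gzip.compress(json.dumps([{"foo": "bar"}] * 1000).encode())

def load_once():
    with gzip.GzipFile(fileobj=io.BytesIO(payload)) as archive:
        return json.load(archive)

tracemalloc.start()
for _ in range(50):
    load_once()
first = tracemalloc.take_snapshot()
for _ in range(50):
    load_once()
second = tracemalloc.take_snapshot()
tracemalloc.stop()

growth = sum(stat.size_diff for stat in second.compare_to(first, "lineno"))
```

A small, bounded `growth` after a warm-up pass indicates no per-call leak; the 10 MB bound below is an arbitrary generous threshold.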
tzihiang/RiotAPI
665519951
Title: Create GlobalStatistics
Question: username_0:
- Focus on creating global statistics (restricted to a particular tier in a region first):
  - Challenger, NA1
  - Find the win rate of specific champions played in that tier and region
  - (v1.2) In the future: expand the tiers covered, followed by regions
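The win-rate step might be prototyped like this. The match-record shape (a `champion` name and a `win` flag per record) is hypothetical, not the Riot API's actual response format:

```python
from collections import defaultdict

def champion_win_rates(matches):
    """Return {champion: win rate} from simplified match records."""
    played = defaultdict(int)
    won = defaultdict(int)
    for match in matches:
        played[match["champion"]] += 1
        if match["win"]:
            won[match["champion"]] += 1
    return {champ: won[champ] / played[champ] for champ in played}

# Toy data standing in for Challenger/NA1 matches fetched from the API.
sample = [
    {"champion": "Ahri", "win": True},
    {"champion": "Ahri", "win": False},
    {"champion": "Garen", "win": True},
]
rates = champion_win_rates(sample)
print(rates)  # {'Ahri': 0.5, 'Garen': 1.0}
```

Real data would come from the match endpoints filtered to the chosen tier and region; only the aggregation step is shown here.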
gradle/gradle
437193411
Title: Remove overhead added by `*Scope` classes in Kotlin DSL via Kotlin 1.3 inline data classes
Question: username_0: According to [this blog post](https://www.pacoworks.com/2018/04/09/arrow-0-7-a-quality-of-life-upgrade/), Kotlin 1.3 will introduce inline data classes.

Inline data classes would allow us to remove the overhead from types such as `DependencyHandlerScope`, `NamedDomainObjectContainerScope`, etc., whose only purpose is to wrap a Gradle type in order to provide a Kotlin-friendly API.

Answers: username_1: Still would be nice to have this performance optimization.
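A rough sketch of what such a zero-overhead scope could look like, assuming Kotlin 1.3's `inline class` feature as described in the post. `DependencyHandler` is stubbed here; the real Gradle interface is larger, and the actual `DependencyHandlerScope` API differs:

```kotlin
// Stub standing in for Gradle's DependencyHandler.
interface DependencyHandler {
    fun add(configuration: String, notation: Any)
}

// With `inline class`, the wrapper is erased at most call sites, so the
// Kotlin-friendly receiver no longer costs an allocation per use.
inline class DependencyHandlerScope(val dependencies: DependencyHandler) {
    operator fun String.invoke(notation: Any) = dependencies.add(this, notation)
}
```

The DSL usage (`"implementation"("com.example:lib:1.0")`) would stay the same; only the boxing at the scope boundary goes away.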
scottcorgan/nash
94354323
Title: Resolve thunks and promises to address sync vs async code Question: username_0: The handler is assumed to be synchronous unless it returns a promise or a function. If it returns a promise or a function, it will be assumed to be async and will wait accordingly. This will alleviate the requirement that everything in Nash be async. Most of the time async doesn't make sense fro the command line. Sync ```js var cli = require('nash')(); cli.command('command').handler(function (data, flags) { // Do some stuff }); ``` Async (thunkish) ```js var cli = require('nash')(); cli.command('command').handler(function (data, flags) { return function (done) { // Do some async stuff done(); } }); ``` Async (promises) ```js var cli = require('nash')(); cli.command('command').handler(function (data, flags) { return somePromise.then(function () { }); }); ``` Status: Issue closed Answers: username_0: Will be a plugin in https://github.com/cmd-js/cmd
department-of-veterans-affairs/va.gov-team
1141520864
Title: [Assistive tech and device support] Alert and status messages aren't announced without receiving focus. (11.22.1)
Question: username_0: ### General Information

#### VFS team name

#### VFS product name
eGain

#### Point of Contact/Reviewers
<NAME> (@briandeconinck) - Accessibility

*For more information on how to interpret this ticket, please refer to the [Anatomy of a Staging Review issue ticket](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/Anatomy-of-a-Staging-Review-Issue-ticket.2060320997.html) guidance on Platform Website.*

---

### Platform Issue
Alert and status messages aren't announced without receiving focus.

### Issue Details
Error messages for the initial Name/Your Question form do not follow standard patterns. The messages are not programmatically associated with their inputs (e.g. using `aria-describedby`), and are announced by screen readers only when focus leaves an input. The inputs themselves are not properly marked as required (visibly prior to user input, or programmatically with a `required` or `aria-required` attribute) and do not programmatically indicate when the input or lack of input is invalid (using `aria-invalid`). If a user inadvertently fails to enter something into one of the inputs, the "Start Chat" button simply doesn't work, with no visible or programmatic indicator that the button is disabled, no error message, and no way to return focus to the input that needs to be reviewed.

### Link, screenshot or steps to recreate

### VA.gov Experience Standard
[Category Number 11, Issue Number 22](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)

### Other References
WCAG SC 4.1.3_AA

---

### Platform Recommendation
Follow established best practices for input validation and error messages. This includes:

- Visually and programmatically indicate required fields prior to a user entering data.
- When error messages are generated, associate them with their input programmatically using `aria-describedby`.
- When an input or lack of input is not valid, indicate that programmatically using `aria-invalid`.
- When the user attempts to press "Start Chat" with invalid data, provide an error message that receives keyboard focus.

### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] Close the ticket when the issue has been resolved or validated by your Product Owner.

If a team has additional questions or needs Platform help validating the issue, please comment in the ticket.
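A minimal markup pattern showing those pieces together. This is illustrative only, not eGain's actual markup; the ids and copy are made up:

```html
<!-- Required state is visible and programmatic; the error message is
     associated via aria-describedby and the input flagged aria-invalid. -->
<label for="first-name">First name (*Required)</label>
<input id="first-name" type="text" required
       aria-describedby="first-name-error" aria-invalid="true">
<span id="first-name-error" role="alert">
  Please enter your first name
</span>
```

On a failed "Start Chat" attempt, focus would be moved to the first invalid input (or to a summary alert) with `element.focus()` so screen reader users land on the problem.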
fiuba-labo-de-micro-miercoles/2019-2c-primer-proyecto-ivanrubin10
679804639
Title: TPO6 Uploaded

Question: username_0: Hi Fabricio, I went by what Fernando said in class. That is, if polling is used to detect the buttons, the possible bounce effects are already avoided, because the state of the port is being checked constantly. My understanding is that if there is a bounce effect and an erroneous detection, the next read of the port would cancel that effect out. And since this is almost immediate, the effect isn't noticeable on the LEDs.

Answers: username_1: Dear Ivan, in general terms the project is fine. I don't agree that there's no need to handle button bounce. How often are you sampling the buttons? From what I can see, quite fast. I think what's happening is that when you press or release a button you're going to get an oscillation in the values you read, so the prescaler is going to oscillate too. What do you think? Regards!

username_0: Hi Fabricio, I went by what Fernando said in class. That is, if polling is used to detect the buttons, the possible bounce effects are already avoided, because the state of the port is being checked constantly. My understanding is that if there is a bounce effect and an erroneous detection, the next read of the port would cancel that effect out. And since this is almost immediate, the effect isn't noticeable on the LEDs.

username_1: Hmm, now I see. In this case it's true that it will settle on the correct prescaler value, but the configuration is going to have an unnecessary oscillation, even if it's imperceptible. As a simple debounce, I'd suggest not updating the prescaler value until you have a stable reading of the buttons. An easy way to do it is to require N equal readings before touching the configuration. Regards! Fabri

username_0: Fabricio, I've added a delay after the blink pattern is selected so that it waits a while longer before reading again. I updated the report, the code, and the flow diagram. I've uploaded everything. Regards!

username_1: Hi Ivan! I see you added the delay, but I don't see you validating the input before acting on the prescaler. The logic should be: read1, wait, read2, check that read1 equals read2; if they match, update the prescaler, otherwise go back to read1. Regards! Fabricio

username_1: Hi Ivan! Let me know when you've made the corrections so we can close out the course. Regards!

username_0: Hi Fabricio! Sorry for the delay, I was studying for a final until yesterday. I've done what you asked and uploaded the files again. Regards! Ivan

Status: Issue closed
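The debounce Fabricio describes — read, wait, read again, and only act once consecutive readings agree — is language-agnostic logic. Here is a small sketch of the "N equal readings" variant, written in JavaScript purely for illustration (the actual TP would implement this on the microcontroller, around its port read and the prescaler update; the sampling period and port access are left out as assumptions):

```javascript
// Feed in one raw button-port reading per sampling period; the value is
// reported back only after n consecutive identical readings. Until then
// the function returns null, meaning "don't touch the prescaler yet".
function makeDebouncer(n) {
  let last = null; // most recent raw reading
  let count = 0;   // how many times in a row we've seen `last`
  return function sample(reading) {
    if (reading === last) {
      count += 1;
    } else {
      // Reading changed (possibly a bounce): restart the stability count.
      last = reading;
      count = 1;
    }
    // Only a reading stable for n samples is handed back to the caller.
    return count >= n ? last : null;
  };
}
```

With `n = 3`, a bouncy sequence such as 0, 1, 0, 1, 1, 1 reports a value only once the 1 has been seen three times in a row, so the prescaler is written once instead of oscillating during the press.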