The entire list of firewall rules that need to be configured to support every permutation of Cloud Foundation on VxRail is extensive and out of scope for this guide. As part of the delivery engagement, Dell professional services will work with a customer’s network administrators to identify all firewall rules that need to be configured before starting a Cloud Foundation on VxRail deployment. Depending on your company’s security policies, if a firewall or firewall rules are in place between Cloud Foundation on VxRail VLANs (for example, between the management network of the Management Domain and a VI Workload Domain), then an extensive list of ports must be opened. You can research the list at https://ports.vmware.com/home. For simplicity’s sake, an any-any trust rule between any of these pairs of subnets is the most practical option. The following basic firewall rules must be in place:
Filters are application programs used in a firewall to examine packets on their arrival at the firewall. Filters help with firewall security in that they route or reject packets based on defined rules. Filters can be configured per user and can be used to perform specific sets of actions, including on packets of a particular protocol family. In most cases, filters use the packet’s source IP address, destination IP address, IP protocol ID, TCP/UDP port number, ICMP message type and fragmentation flags to decide the course of action for the packet. The key parts of the packet are compared against the rules and a database of trusted information to evaluate the course of action. Packets that pass the test are allowed through, whereas those that fail are rejected and denied any further service. To protect against denial-of-service attacks and floods, filters can be used to limit the rate of traffic destined for the routing engine, and can restrict that traffic on the basis of source, protocol and application. Filters can also be configured to address special circumstances such as those associated with fragmented packets. Filters offer many advantages: they provide a control mechanism for packets in transit and protect the router from heavy traffic and external incidents.
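As a concrete illustration of how this rule matching works, here is a minimal sketch of a stateless packet filter in Python. It is a hedged example under stated assumptions: the rule fields, networks and ports are illustrative and not taken from any particular firewall product; the first rule that matches a packet’s header fields decides the action, and unmatched packets fall through to a default deny.

```python
# Minimal sketch of a stateless packet filter: the first matching rule wins,
# unmatched packets are denied. Rule fields and sample values are illustrative.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    src: str                 # source network, e.g. "10.0.10.0/24"
    dst: str                 # destination network
    protocol: str            # "tcp", "udp" or "icmp"
    dst_port: Optional[int]  # None matches any port (e.g. for ICMP)
    action: str              # "accept" or "reject"

RULES = [
    Rule("10.0.10.0/24", "10.0.20.0/24", "tcp", 443, "accept"),
    Rule("0.0.0.0/0",    "10.0.20.0/24", "icmp", None, "reject"),
]

def evaluate(src_ip, dst_ip, protocol, dst_port, rules=RULES, default="reject"):
    """Return the action of the first rule matching the packet's header fields."""
    for rule in rules:
        if (ip_address(src_ip) in ip_network(rule.src)
                and ip_address(dst_ip) in ip_network(rule.dst)
                and protocol == rule.protocol
                and (rule.dst_port is None or dst_port == rule.dst_port)):
            return rule.action
    return default  # packets matching no rule are denied any further service

print(evaluate("10.0.10.5", "10.0.20.7", "tcp", 443))  # accept
print(evaluate("192.0.2.1", "10.0.20.7", "udp", 53))   # reject (default)
```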
Azure AD: The Blind Spot in Your Data Protection Plan. Azure Active Directory is a great addition to have in your toolkit, but if you’re not backing it up, it can be compromised and made unusable by issues such as cybersecurity threats, corruption caused by synchronization errors, accidental data deletions, and malicious insider activity. These issues come with great risk and cost—your users lose access to platforms that are required for work, and you have to spend a large amount of time rebuilding your Azure AD. This can affect your reputation as an MSP. Watch our webinar to get answers to these questions: - What would happen if Azure AD was suddenly not available? - How would you manage losing your internal directory and the source of your SSO logins across multiple tools? - How long would it take you to recover a lost Azure AD? - And more
Support tools for the validation of filters. To the left of each sentence is shown the target interaction (either from gold standard or derived by the system). Green means that the interaction detected by the system matches an interaction in the gold standard. Gold marks an interaction in the gold standard not detected by the system. Red denotes an interaction detected by the system, but not contained in the gold standard. In other words, true positives are in green, false positives are in red, and false negatives are in gold. Rinaldi et al. Genome Biology 2008 9(Suppl 2):S13 doi:10.1186/gb-2008-9-s2-s13
What it's for - Avoid losing rankings by finding security issues commonly penalized by search engines - Minimize the risk of fraud through attacks using leaked user data and stolen login credentials - Check pages for insecure elements and prevent security warnings from being shown and impacting revenue - Identify pages where security measures could be improved by implementing browser security policies. Website security standards are now being enforced by modern browsers. If your siteops department is slacking on these topics, the intensified browser enforcement could seriously harm your business – e.g. if the browser omnibox shows your ecommerce shop as not secure. Website security analysis: HTTP to HTTPS migration – Identify URLs that have not been properly switched from HTTP to HTTPS and therefore still pose a security risk. Unsafe resources & mixed content – Discover all URLs that send cookies over an insecure connection or miss the Secure flag and can probably be stolen by an attacker, and all pages that contain forms that could leak data through an unsafe HTTP connection or expose data through GET parameters. Strict transport security – Identify all URLs without a Strict-Transport-Security HTTP header, which enforces HTTPS for subsequent requests, or that specify a duration too short for HSTS preload. Content security policy – Discover all pages that do not specify a Content-Security-Policy HTTP header and therefore use the default policy of the browser, which is less strict.
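The header checks above can be automated with a short script. The following is a minimal sketch, assuming the `requests` library is available; the one-year preload threshold and the example URL are illustrative, and a real crawler would of course check every discovered page.

```python
# Minimal sketch of the HSTS / CSP / secure-cookie checks described above.
# The preload threshold and the URL are illustrative values.
import requests

HSTS_PRELOAD_MIN_AGE = 31536000  # one year, commonly expected for HSTS preload

def check_security_headers(url):
    findings = []
    resp = requests.get(url, allow_redirects=True, timeout=10)

    hsts = resp.headers.get("Strict-Transport-Security")
    if hsts is None:
        findings.append("missing Strict-Transport-Security header")
    else:
        max_age = 0
        for directive in hsts.split(";"):
            directive = directive.strip()
            if directive.startswith("max-age="):
                max_age = int(directive.split("=", 1)[1])
        if max_age < HSTS_PRELOAD_MIN_AGE:
            findings.append(f"HSTS max-age={max_age} is too short for preload")

    if "Content-Security-Policy" not in resp.headers:
        findings.append("no Content-Security-Policy header (browser default policy applies)")

    for cookie in resp.cookies:
        if not cookie.secure:
            findings.append(f"cookie '{cookie.name}' is set without the Secure flag")

    return findings

for issue in check_security_headers("https://example.com"):
    print(issue)
```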
While using Time Machine on a Macintosh, you may find that a large number of files fill up the internal hard drive space and cannot be deleted. The same can happen when a USB drive is ejected before DLP is able to scan its files. DLP needs to process files as they are copied; if it does not have time to process them, it copies the files to the frbackup folder for later processing. System Integrity Protection (SIP) may be turned on, and Time Machine, an external hard drive, or a USB drive may be in use. There are two possible methods to resolve this issue. The first is to shut down the agent and attempt to delete the files. If the first option doesn't work, uninstall the agent and then delete the files. This second method is used when method 1 does not allow the files to be deleted.
Applications that handle business-critical information can be tested thoroughly through a source code audit, which reveals vulnerabilities that are difficult to find in black-box or grey-box penetration tests. Compared to automated tools, our experts can perform this complicated task better. Source code review can identify vulnerabilities in the functions of your web pages. Some of these vulnerabilities result from a developer lacking secure coding knowledge or from mistakes, such as flawed business logic, hard-coded sensitive data, or even developers’ backdoors. Penetration testing alone cannot discover the additional application vulnerabilities in the developed code that a security source code review can. Automated tools can perform large code scans and detect some issues, but they cannot understand the context of the application, which is a critical part of a security source code review for the business. For an effective source code review result, every single finding needs to be verified by an expert to determine whether there is a blind spot that automated tools cannot check.
Twitter-based information security pundit ‘the grugq’ recently tweeted the idea that ransomware authors, AKA criminals, are “doing more to advance the state of cyber-security readiness than the last 10 RSA conferences.” Now, it’s very difficult to measure the effectiveness of a conference from that angle, given the number of variables and the impossibility of measuring and comparing the cyber-security readiness of attendees’ companies before and after the conference. But it’s very difficult to deny that WannaCry and NotPetya/ExPetr have advanced awareness of the dangers ransomware poses to business, pushing it further up the boardroom agenda. It has been widely reported that shipping giant A.P. Moller-Maersk was affected by NotPetya so badly that the firm was forced to communicate via WhatsApp, and saw losses of around $300 million USD. That alone should have been a very difficult conversation for the Maersk board of directors. So what are the universal lessons learnt from the NotPetya ransomware attack? Having software fully patched with the latest updates from the software manufacturer will go a long way to reduce a network’s attack surface. In the case of NotPetya, several samples were collected of the malware propagating via PDF and Word attachments. Legitimate methods are being used to gain entry and, as a result, are managing to go undetected. ExPetr is shown to have used two legitimate Windows tools, Windows Management Instrumentation Command-line (WMIC) and PsExec. Credential abuse should be high up on the priority list as malware has started to sniff passwords. ExPetr is said to have used the Mimikatz toolset to obtain user login credentials in plain text, including local admin accounts and domain users across networks. Software updating capabilities are being taken over by malware to help it spread. Microsoft says that ExPetr got into the self-update function of the M.E.Doc tax accounting software, which is widely used in Ukraine, a country that was hit particularly hard by ExPetr. A frequently tested and regularly used backup and recovery solution for all business systems and data should do most of the leg work in fighting against ransomware and other malware attacks, whether it’s WannaCry, ExPetr or otherwise.
What is Security as Code? Security as Code is the methodology of codifying security and policy decisions and socializing them with other teams. Security testing and scans are implemented into your CI/CD pipeline to automatically and continuously detect vulnerabilities and security bugs. Access policy decisions are codified into source code, allowing everyone across the organization to see exactly who has access to what resources. Adopting Security as Code tightly couples application development with security management, while simultaneously allowing your developers to focus on core features and functionality, and simplifying configuration and authorization management for security teams. This improves collaboration between Development and Security teams and helps nurture a culture of security across the organization. Implementing Security as Code Security as Code generally comes in three different forms: security testing, vulnerability scanning and access policies. Each of these enables your Engineering teams to understand and fix security issues early on in development as opposed to waiting until the project is ready to ship and is blocked due to security concerns. When you take on a Security as Code mentality, you are codifying collaboration directly where your development teams are working. Security as Code lifts up Development and Security teams together to allow each to focus on their core strengths. Security testing expands on best-in-class coding practices, adding to the standard suite of tests so that it includes not only functional and integration testing but also security-focused testing. Static analysis for security vulnerabilities can be implemented on each commit or pull request. Permission boundaries can be checked to verify they cannot be crossed. APIs can be tested to ensure they’re meeting authentication and authorization requirements. Security testing meets your developers where they already are, providing them immediate feedback on each and every commit. Vulnerability scanning at every level of your architecture across your pipeline can verify that each section of your application and deployment is secured against known vulnerabilities. Source code can be scanned for vulnerable libraries. For example, applications can be scanned for susceptibility to XSS and SQL injection. Containers can be scanned for vulnerabilities in individual packages and for adherence to best-in-class practices. Full scanning of test, staging and production environments can be done continuously and automatically. Scan early and scan continuously to verify your expected security controls are in place and so that you can find issues sooner rather than later. User and data access policies codify governance decisions that can then be reviewed by anyone in your organization. These policies can be standardized, reducing the toil necessary to constantly monitor and maintain one-off requests. Authorization can be offloaded to external libraries, allowing your Dev teams to focus on core features. Security teams now have a central repository to work directly with developers to monitor and review authorization, allowing the entire company to move faster without breaking core security and compliance requirements. Historically, organizations consisted of separate and siloed Development, Operations and Security teams. Dev teams followed waterfall development and, more often than not, so did the deployment from one team to the next. Dev teams finished a project and marked it as code complete.
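As an illustration of the kind of security-focused test that can run on every commit, here is a minimal, hedged sketch using Python’s unittest and Flask. The tiny in-line app and its `/admin/users` endpoint are stand-ins for your real application, not part of any specific product, and the token check is deliberately simplified.

```python
# Minimal sketch of a security-focused test that enforces an authorization
# boundary on every commit. The in-line Flask app is a stand-in for a real one.
import unittest
from flask import Flask, request

def create_app():
    app = Flask(__name__)

    @app.get("/admin/users")
    def admin_users():
        # Illustrative check only: a real app would validate the token properly.
        if request.headers.get("Authorization") != "Bearer admin-token":
            return {"error": "forbidden"}, 403
        return {"users": ["alice", "bob"]}

    return app

class AuthorizationBoundaryTest(unittest.TestCase):
    def setUp(self):
        self.client = create_app().test_client()

    def test_anonymous_requests_are_rejected(self):
        self.assertEqual(self.client.get("/admin/users").status_code, 403)

    def test_non_admin_token_is_rejected(self):
        resp = self.client.get("/admin/users",
                               headers={"Authorization": "Bearer user-token"})
        self.assertEqual(resp.status_code, 403)

if __name__ == "__main__":
    unittest.main()
```

A CI job can run such tests on each pull request so that a failed authorization check blocks the merge rather than surfacing after release.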
At this point, Ops teams would then be responsible for actually getting it into production. Security teams were unfairly given a reputation for saying ‘no’ to everything and were maybe informed of this process at some point, but were often the last to know. This operating model often pitted teams against each other with tension from one step to the next and between each of the teams. This lack of collaboration also led to long release cycles, cost overruns and ultimately delays to delivering new features and functionality that would scale and be secure. Each team had differing goals and collaboration suffered. As businesses moved to the cloud, adopted a microservices-centric architecture, and began pushing the envelope on release frequency, this operating model started to change completely. Development and Operations teams began to work together in a DevOps model. Infrastructure as a service allowed for the popularization and widespread use of Infrastructure as Code (IaC). Resources no longer needed to be specified months in advance, ordered and physically racked in data centers. Instead, programmatic APIs could be utilized to create brand new resources on demand. Those resources could be automatically scaled up or down. Infrastructure could now be completely created and managed using code. IaC removed the friction and toil associated with teams manually provisioning and managing fleets of servers, databases, operating systems, containers and, at this point, all infrastructure associated with software applications. Dev and Ops teams are no longer separate; they work together to build and scale applications. Security as Code builds off the gains that these organizations have seen from IaC. Security as Code similarly sees a migration to security and policy as code to remove the toil and friction associated with securing software in an IaC mindset. Security and policy as code began with standard software testing of areas like permission boundaries. These unit and functional tests were Security as Code before being labeled as such. Security as Code also arose out of the desire for automation from internal and external red teams and pentesters to automate all of the things. Known as DevSecOps or DevOpsSec, this methodology has become the way organizations can enable collaboration, agility and security, early and often across their entire infrastructure. Security as Code Benefits When moving to a Security as Code model, there are a number of key benefits that are realized across the organization. One of the key benefits and early drivers was fostering collaboration and enabling agility between and among Dev and Security teams. Another key benefit has been visibility for many teams across the organization. Finally, codifying both security and policy simplifies management and reduces toil across the organization. Greater Collaboration: As Dev teams moved to agile workflows, Security teams were often left behind still operating in a waterfall methodology, being brought in at the very end. Dev teams were quickly iterating and ignoring or subverting security processes that hadn’t yet been updated. Security teams that quickly recognized the benefits of agile methods started working directly with Dev teams to meet them where they were. This naturally led to collaboration when they both began to work on shared problems.
No longer were they working on orthogonal problems with different motivations, they were working together, directly on the same code base, making sure tests passed before code moved to the next step. Improved Morale: Another problem that arose in organizations was that many teams outside of Security and Compliance had very little visibility into their decisions. Dev teams hoped for an approval and were distraught with what seemed like constant no’s. As security and compliance requirements become codified, there is no longer a question as to why a decision is made, it’s clear from the code. For example, if you have integrated Kubernetes with Open Policy Agent (OPA), you can codify the users and groups that have direct access to each Kubernetes cluster. This allows you to set consistent policy that corresponds to service ownership instead of ad-hoc permission requests. If security is fully baked into your pipeline, there are fewer surprises and last minute blocks when it’s code complete. Increased Visibility: Security as Code helps simplify and centralize user and data access reducing toil and further providing visibility. Access and policy changes can now be tracked, and requests for changes can be self service. For example, you may be using Terraform to manage IAM resources for your cloud provider. By tracking IAM changes in source code, anyone can now see all permissions and can make a pull request directly to the Terraform repo to request changes. When you centralize your decisions to a declarative policy engine, you no longer need to make the same decision over and over again in separate systems. Long gone are the scattered policies of authorization to scattered applications. Shorter Release Cycle: When you integrate security requirements early on in design and development, issues can easily be addressed resulting in increased velocity. Dev and Security teams are no longer trying to address minor to complex to systemic issues after a new feature or functionality is “code complete”. With the advent of Security as Code libraries, application development can be decoupled from the fraught process of implementing your own custom authorization. For example, by integrating with OPA, developers can enable Role Based Access Controls (RBAC) in only the time it takes to enable the integration. Traditionally, this would have required multiple sprints from the Security, Product and Development teams to understand the requirements, what RBAC is, development time and finally full code review. Developers can focus on their core strengths and speed up application development. Additionally, as Security teams continue to adopt this approach, they will begin to adopt or develop their own libraries and tooling to further speed up releases by providing resources to ensure that applications are secure by default. Better Security: When looked at holistically, each test, scan or policy that you can integrate, early, often and continuously, will find problems sooner so they can be addressed before others find them. Undertake this approach for all sorts of add-on benefits, but ultimately we’re all in this together to better secure the data we all care about. Security as Code with Cyral The principles of Security as Code and API-first have been at the core of design and development at Cyral. We have embraced cloud-first, everything as code and API-first design to meet our customers where they are. Our commitment to Security as Code starts first with building a security product that is developer friendly. 
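To make the OPA integration mentioned above more concrete, here is a minimal, hedged sketch that asks a locally running OPA server for a cluster-access decision over its Data API. The policy path `kubernetes/cluster_access/allow` and the input fields are illustrative assumptions, not an existing policy; only the OPA REST endpoint format itself is standard.

```python
# Minimal sketch of querying a local OPA server for an access decision via its
# Data API. The policy path and input fields are illustrative assumptions.
import json
import urllib.request

OPA_URL = "http://localhost:8181/v1/data/kubernetes/cluster_access/allow"

def is_allowed(user, group, cluster):
    payload = json.dumps({"input": {"user": user, "group": group,
                                    "cluster": cluster}}).encode()
    req = urllib.request.Request(OPA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # OPA wraps the policy decision in a top-level "result" key.
        return json.load(resp).get("result", False)

print(is_allowed("alice", "platform-team", "prod-cluster-1"))
```

Because the decision lives in a central policy engine, the same query can back a CI check, an admission controller, or an ad hoc audit script without re-encoding the rules in each place.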
We have designed our product to naturally fit into existing development workflows. Our application can be easily deployed as part of your testing, staging and production environments to enhance tracing and security at each step of the way. No matter your setup, we have developed options to fit your deployment requirements. We have focused on IaC options from Terraform to Helm and more to support your existing workflow. To truly be able to expand to all existing workflows, we have built our product based on API-first principles. We recognize that Cyral is only one part of your existing toolset, and so we have built out dozens of integrations across the stack, from notifications to logging to issue tracking and more. Cyral’s focus will continue to be on data layer security and advanced data tracing across any number of data repositories available. One of the key components of Security as Code is to integrate security directly into your CI/CD pipeline, bringing security testing directly and automatically as your application moves from code commit to production. For each step of the pipeline, Cyral will enable advanced tracing and consistent authentication and authorization. Cyral completely supports this model and recommends integrating Cyral in every environment to fully take advantage of our advanced security and tracing capabilities. Cyral’s application comes with out-of-the-box templates to support your IaC workflows and install our sidecar in your infrastructure, the way you deploy the rest of your infrastructure. Cyral can be integrated into your CI/CD pipeline and can be deployed in dev, staging and production along with your application code, ensuring that all data layer activity from every application is automatically observed, controlled and protected. By starting with Cyral in your dev environment, users can now also measure and validate that data layer performance and control do not regress with each new release. Cyral’s advanced tracing provides full visibility into what your users and your applications are doing, allowing you to triage and find issues quicker. Cyral also utilizes Security as Code for data and access policy decisions. We have integrated with Open Policy Agent (OPA), the standard for “policy-based control for cloud native environments”, as the basis for our policy engine. OPA allows our users to write declarative policy for granular access to data repositories and the data that is contained within them. Cyral users write their declarations for user and data access in YAML. In the backend we then use this as a data input to our prewritten Rego queries to verify adherence to policy. By implementing it this way, Cyral remains performant and allows our customers to write configs in a markup language they’re likely already using. YAML also encourages all levels of internal stakeholders to be able to review, edit and comment on policy-based code. Writing policy as code with YAML means that you don’t need to be an engineer to contribute. Cyral is fully committed to supporting Security as Code with our customers, and helping them improve their agility and reduce risk. Our clients can have a policy repo in GitHub, so when they push a new version of policy to their repo, a GitHub Action is called to automatically update policy in their Cyral deployment.
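The "YAML declarations as data input to Rego" pattern can be illustrated generically. This is a hedged sketch, not Cyral’s actual implementation: the file name, OPA data path and declaration fields are assumptions. It simply loads a YAML access declaration and publishes it to a local OPA instance via OPA’s Data API, where prewritten Rego rules could then reference it.

```python
# Hedged sketch: load a YAML access declaration and publish it as an OPA data
# document so prewritten Rego policies can evaluate it. File name, data path
# and fields are illustrative, not any vendor's real schema.
import json
import urllib.request

import yaml  # PyYAML

def push_policy_data(yaml_path,
                     opa_url="http://localhost:8181/v1/data/access/declarations"):
    with open(yaml_path) as fh:
        declarations = yaml.safe_load(fh)  # e.g. users, repositories, allowed reads
    req = urllib.request.Request(
        opa_url,
        data=json.dumps(declarations).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",                      # OPA's Data API upserts documents with PUT
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status                 # 204 No Content on success

if __name__ == "__main__":
    print(push_policy_data("access-policy.yaml"))
```

A CI job (for example, triggered on a push to the policy repo) could run a script like this so that the deployed policy data always matches what reviewers approved in source control.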
Any runtime changes in the application, as it gets promoted in the continuous delivery pipeline (for example, Spinnaker) from testing to canary to production, get tracked through metrics and traces generated by Cyral and routed to the team’s monitoring and logging platforms, such as Datadog and Jaeger or an ELK stack. If any alerts are generated, they can then be sent to the messaging and issue tracking systems, such as Slack, Jira and PagerDuty. By integrating Cyral into your full pipeline, any risks and vulnerabilities are caught as soon as possible, and all applications promoted to production by the Development team come with built-in access control policies, which can be reviewed by the Security teams if necessary. Together, we have implemented a full CI/CD security and policy as code pipeline in production.
Web Security: Web Access Policy (Cloud Management) Web Security Admins can customize and create web access policies. If you haven’t yet been assigned the role of Web Security Admin, you should talk to your Account Administrator. Once you’re a Web Security Admin, follow these steps to get started with Web Access Policy. - Enable Web Security to turn on the Web Security view. Select Manage > Service Setup > Overview, go to the Web Security panel, and select Enable. - Select Manage > Web Security > Web Access Policy. Here, you can view and customize your web access policies. To create a new policy, select Add Policy. - Use this table to guide you as you put together your policy. Web access policies are enforced from top to bottom. Blocked applications and URLs always supersede applications or URLs that you allow. Decide what the purpose of this policy is: Give your policy a name. Optionally, you can add a description, tags, and a schedule for your policy. Giving your policy a descriptive name and a meaningful description of its purpose makes it easier to manage later on, not just for you but for other admins as well. Tags can help you group policies with similar characteristics. Schedules can help you manage policies that need to be enforced at regular intervals. Decide where and for whom your policy is enforced: In the Source section, define traffic to enforce based on its source. Users - Add users and groups of users whom your policy affects. Advanced Settings - You can enforce traffic based on the deployment type, such as Explicit Proxy or Remote Networks. Device - Add a device posture profile to use device state information, such as whether a device is jailbroken, for policy enforcement. Decide what gets blocked: In the Blocked Web Applications and Blocked URL Categories sections, add applications and URL categories to block. Focus on unsanctioned and risky applications that do not have legitimate use in your network, and on malicious websites. Decide what’s allowed: In the Allowed Web Applications and Allowed URL Categories sections, add sanctioned applications and URL categories to explicitly allow for enterprise use. You can even restrict access to certain features within an allowed application. For example, you may want to allow Gmail, but block access to chat or calls within Gmail. - Review the following: The default settings adhere to best practices and provide a good level of protection, but you can customize them if you’d like. Security settings are applied globally. Use the policy objects available to help you build out your policy. - Select Push Config at the top right corner of your screen. A Push window opens. - Enter a description if you’d like, and then select Push to push your new policy and settings to the cloud for enforcement. If you’re an Account, App, or Instance admin, just be sure to select the checkbox for Web Security, and then select Push.
Traditional network security tools are now obsolete as AI takes over Network security has always been a job that requires non-stop attention, especially for platforms that deal with sensitive and financial data. Online casino operators have had to allocate an inordinate amount of resources to combatting the threat of cyberattacks, but they are now getting a great deal of support that makes the task easier. Artificial intelligence (AI) makes cybersecurity much more refined and complete, pushing out traditional network security tools that are now becoming obsolete. For years, online casinos, and other online companies, had to rely on firewalls, proxies and anti-virus software as the primary means to thwart cyberattacks. However, as effective as they have been, there has been a shift in the dynamic of what constitutes a cyber threat over the past several years that weakens the ability of these tools to stop an attack. Just as cybercriminals use cutting-edge technology to lead their attacks, online casinos are using it to stop them. AI is making the job easier, using machine learning to instantly analyze a potential threat and stop an attack before it happens. Traditionally, stopping a threat was possible only through a reactive response, but AI is leading the way for proactive detection and preventive responses to stop a platform from being compromised. Certain AI tools are able to analyze baseline behavior and identify anomalies that might be an indication of a zero-day attack. Other tools constantly analyze user behavior and raise an alarm if there is a deviation from the expected action. These solutions have already proven to be better at protecting iGaming platforms and, as AI technology continues to improve, so will the protection it offers.
The top five Android threats SECURITY| June 14, 2012, 1:36 p.m. Sophos has revealed the extent of malware targeting Android mobile phones, by analysing detection statistics from its Sophos Mobile Security app. This data was taken from installations of the application on Android smartphones and tablets in 118 different countries around the world. SophosLabs' research revealed the top five most commonly detected malware on Android are: 1. Andr/PJApps-C - 63.4% 2. Andr/BBridge-A - 8.8% 3. Andr/Generic-S - 6.1% 4. Andr/BatteryD-A - 4.0% 5. Andr/DrSheep-A - 2.6% Others - 15.1% 1. Andr/PJApps-C. When Sophos Mobile Security for Android detects an app as Andr/PJApps-C it means that it has identified an app that has been cracked using a publicly available tool. Most commonly these are paid-for apps that have been hacked. They are not necessarily always malicious, but are very likely to be illegal. 2. Andr/BBridge-A. Also known as BaseBridge, this malware uses a privilege escalation exploit to elevate its privileges and install additional malicious apps onto Android devices. It uses HTTP to communicate with a central server and leaks potentially identifiable information. These malicious apps can send and read SMS messages, potentially costing the mobile owner money. In fact, it can even scan incoming SMS messages and automatically remove warnings that you are being charged a fee for using premium rate services it has signed the user up for. 3. Andr/Generic-S. Sophos Mobile Security generically detects a variety of families of malicious apps as Andr/Generic-S. These range from privilege escalation exploits to aggressive adware such as variants of the Android Plankton malware. 4. Andr/BatteryD-A. This "Battery Doctor" app falsely claims to save battery life on an Android device. But it actually sends potentially identifiable information to a server using HTTP, and aggressively displays adverts. 5. Andr/DrSheep-A. This is an Android equivalent of the desktop tool Firesheep. It can allow malicious hackers to hijack Twitter, Facebook and LinkedIn sessions in a wireless network environment. "The volume of malware that Sophos discovered highlights that mobile security is a real and growing problem, especially on Android," says Brett Myroff, CEO of Sophos distributor NetXactics. "Criminals are creating more and more targeted malware for different platforms. Smartphone users need to realise that security is no longer limited to PCs; mobiles and tablets are also at risk if not sufficiently protected." A new version of Sophos's free anti-virus for Android is available from: https://play.google.com/store/apps/details?id=com.sophos.smsec
Penetration testing is a method of evaluating the security of a computer system or network by simulating an attack by malicious actors. I perform both Web application and infrastructure penetration testing, and my integrated approach means I don’t just give you a list of problems but help you solve them and address their root causes now and in the future. Penetration testing of Internet-facing applications and infrastructure is an essential necessity for any business. Penetration testing gives you the assurance that all the hard work you have invested in designing and implementing secure infrastructure or applications has paid off and your product or service won’t fall apart when subjected to malicious activity. Traditional penetration testing concludes with the delivery of a report at the end of the penetration test. It lists identified issues and makes recommendations, and you are left to address the issues. No wonder many organisations find it difficult to actually improve the security of their IT by conducting penetration tests alone - it takes much more to consistently operate secure services or release secure applications than just a penetration test. After I issue the penetration test report we can work with your technical team to address both the specific security issues as well as their root causes. Following the conclusion of the penetration testing engagement I issue a detailed internal report with my findings and recommendations, as well as a certificate of testing that can be shared with third parties such as customers, investors or regulators - provided no significant or material security issues were identified or all such issues were fully addressed and re-tested. Web Application Penetration Testing Penetration testing of Web applications involves identification of security weaknesses and vulnerabilities caused by insecure coding practices, misconfiguration and bugs. It is usually performed on a test instance of the application but can also be performed on live instances in certain cases. The penetration testing process involves intercepting, analysing, modifying and generating specially crafted malicious and/or invalid HTTP requests to identify and exploit vulnerabilities that may exist in the application, such as the ones defined by industry-standard sources such as the OWASP Top 10 Web Application Security Risks: 1. Injection. Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization. 2. Broken Authentication. Application functions related to authentication and session management are often implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities temporarily or permanently. 3. Sensitive Data Exposure. Many web applications and APIs do not properly protect sensitive data, such as financial, healthcare, and PII. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data may be compromised without extra protection, such as encryption at rest or in transit, and requires special precautions when exchanged with the browser. 4. XML External Entities (XXE). Many older or poorly configured XML processors evaluate external entity references within XML documents.
External entities can be used to disclose internal files using the file URI handler, internal file shares, internal port scanning, remote code execution, and denial of service attacks. 5. Broken Access Control. Restrictions on what authenticated users are allowed to do are often not properly enforced. Attackers can exploit these flaws to access unauthorized functionality and/or data, such as access other users’ accounts, view sensitive files, modify other users’ data, change access rights, etc. 6. Security Misconfiguration. Security misconfiguration is the most commonly seen issue. This is commonly a result of insecure default configurations, incomplete or ad hoc configurations, open cloud storage, misconfigured HTTP headers, and verbose error messages containing sensitive information. Not only must all operating systems, frameworks, libraries, and applications be securely configured, but they must be patched/upgraded in a timely fashion. 7. Cross-Site Scripting (XSS). XSS flaws occur whenever an application includes untrusted data in a web page without proper validation or escaping, allowing attackers to execute scripts in the victim’s browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites. 8. Insecure Deserialization. Insecure deserialization often leads to remote code execution. Even if deserialization flaws do not result in remote code execution, they can be used to perform attacks, including replay attacks, injection attacks, and privilege escalation attacks. 9. Using Components with Known Vulnerabilities. Components, such as libraries, frameworks, and other software modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities may undermine application defenses and enable various attacks and impacts. 10. Insufficient Logging & Monitoring. Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems, and tamper, extract, or destroy data. Most breach studies show time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring. API Penetration Testing More and more applications depend on publicly accessible Application Programming Interfaces (APIs) to provide their core functionality as well as to integrate with and/or extend other applications and data sources. With all the versatility and features of APIs come potential security weaknesses and vulnerabilities, some of which can be critical, and many of which can be identified and addressed through penetration testing. However, effective penetration testing of APIs requires complete, up to date and accurate API specification and documentation which would allow a penetration tester to generate valid API requests which then can be used to effectively penetration test the API and its endpoint(s) by generating malicious variations of the original valid request. It is therefore important to ensure that whoever is penetration testing an API is able to generate valid API requests before testing begins. Having an accurate and complete API specification in the right format is key. To obtain the best results from a penetration test of an API the API specification should meet the following good practice requirements: The API specification must be in a standard format such as OpenAPI and machine-readable notation such as JSON or YAML The API specification must be complete, i.e. should not be missing any required methods / parameters / headers / etc The API specification must be accurate, i.e.
it should match the actually deployed services accessible on the relevant endpoint(s) The API specification must not contain unnecessary, extraneous or unused methods / parameters / headers The API specification must correctly specify relevant endpoint URL(s) and specify secure transport ('https://') The API specification must specify authentication / authorisation requirements clearly and unambiguously and specify how any relevant tokens or credentials can be obtained The API specification must specify all required and optional parameters, indicating them as such, and the format in which they must be specified (e.g. 'date': 'DD-MM-YYYY' or 'DDMMYY'?) If there is more than one version of the API the specification must indicate the version of the API and the correct endpoint(s) for that version A number of tools exist which can be used to create, edit, validate and share API specifications, such as: When you have a complete and accurate API specification use an appropriate tool, such as OpenAPI Validator or Swagger Inspector above, to validate it. If the tools identify any issues with the specification they must be addressed before penetration testing. Once the API specification has been validated, share it with the penetration tester alongside a set of sample/test credentials and values for all parameters that cannot be obtained from the API itself. Infrastructure Penetration Testing Infrastructure penetration testing is a generic term covering testing of operating systems, network services, network devices and other targets. Its objective is to identify vulnerabilities and misconfigurations that can be exploited to obtain unauthorised access to data, systems or hosted applications. Specific testing activities and methodologies may differ depending on the scope and objectives of the infrastructure testing engagement but most engagements involve the following stages: 1. Identification and enumeration of targets of testing 2. Reconnaissance and information gathering 3. Identification of vulnerabilities, weaknesses or misconfiguration 4. Testing and exploitation of identified vulnerabilities 5. Post-exploitation activities 6. Reporting and recommendations to address the identified issues All of the above is performed in strict conformance with the client requirements taking into account the scope and the objectives of testing as well as any applicable technical, legal or organisational restrictions. Understanding your options When it comes to commissioning a penetration test you will need to decide whether you require a black, grey or white box penetration test. The type of testing chosen will decide the amount of time and effort required as well as the level of security assurance obtained. Black box testing is the usual type of testing – it gives basic assurance that is usually sufficient in most cases. White box testing provides the maximum possible assurance as it involves additional testing activities including review of design, architecture and source code, while grey box testing is midway between the black and white box testing in terms of assurance. The type of testing chosen determines how much time and effort is required and the extent of your own team’s involvement in the testing process: whereas with black box testing your team’s involvement is limited to provision of a test instance of your application or specification of infrastructure to be tested, with grey or white box testing documentation, meetings and access to source code would have to be arranged. 
The time required to test a particular application or infrastructure depends on its size and complexity, as well as the type of testing. Most penetration testing engagements are black box tests and usually take about a week to complete.
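Following the API specification guidance above, a quick pre-test sanity check can be scripted. The sketch below is a hedged example using PyYAML: the file name is illustrative, the checks only cover a few of the requirements listed earlier, and it is not a substitute for a full validator such as the tools mentioned above.

```python
# Minimal pre-test sanity check on an OpenAPI specification, covering a few of
# the good-practice requirements listed earlier. The file name is illustrative.
import yaml  # PyYAML

def check_openapi_spec(path):
    with open(path) as fh:
        spec = yaml.safe_load(fh)

    problems = []
    if "openapi" not in spec and "swagger" not in spec:
        problems.append("no 'openapi'/'swagger' version field")
    if not spec.get("paths"):
        problems.append("no 'paths' defined, so no valid requests can be generated")
    for server in spec.get("servers", []):
        if not server.get("url", "").startswith("https://"):
            problems.append(f"server {server.get('url')!r} does not use https://")
    if "components" in spec and not spec["components"].get("securitySchemes"):
        problems.append("no securitySchemes: authentication requirements are unclear")
    return problems

for p in check_openapi_spec("api-spec.yaml"):
    print("WARN:", p)
```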
The problem with IT environments is that they are built to operate in relatively static configurations, meaning adversaries can easily study any organization’s static systems and networks and execute their attacks. Moving Target Defense (MTD) assumes that all systems will be compromised at some point, so it seeks to make systems more difficult to breach by constantly changing the attack surface (IP addresses, operating systems, software versions, and configurations). Automated Moving Target Defense (AMTD) is the evolution of this strategy, leaning in towards an automated version that can be incredibly effective. Gartner is encouraging the market to focus on this promising new prevention-based strategy. DHS warns that the static defense approach most organizations apply leaves them vulnerable and recommends embracing proactive security solutions such as AMTD. Unlike other traditional cybersecurity tools that focus on detection and remediation, AMTD is a proactive solution. The idea behind AMTD is to move away from the static paradigm and create moving targets that constantly shift and change, disrupting the traditional attack path used by adversaries. AMTD and Cyber Deception: The Way Forward Gartner has mapped out an AMTD Evolution Spectrum that shows how this dynamic approach might transition to a fully automated solution in 10 years. Deception techniques are part of the “future of cyber defense”, according to Gartner, with the clear objective to make adversaries’ lives more difficult and allow organizations to take back control of the situation. AMTD and deception techniques often go hand-in-hand in cybersecurity strategy, as deception creates environments that alter the attack surface from the attacker’s point of view, providing a perceived path of least resistance to achieve their goals and objectives. This lures them into a thoughtfully designed trap, pushing them to execute malicious activity and expose their tactics, techniques and procedures. Four Ways to Use Deception Technology in AMTD One of the four elements that makes up an AMTD strategy, according to Gartner, is the “use of deception technologies”. AMTD has various applications across different domains to enhance overall cybersecurity posture against advanced persistent threats, and CounterCraft’s sophisticated deception platform is an integral part of implementing them. Read on to find out how AMTD works in the following use cases, and how deception technology supports its implementation: AMTD can enhance the security of cloud environments by continuously shuffling and reassigning virtual resources, from machines to containers, across different physical hosts. CounterCraft in the cloud: CounterCraft supports deploying deception at multiple layers in cloud environments and has automatic cloud host management capabilities. The Platform™ configures real vulnerable virtual machines (AWS EC2, Azure VMs, Digital Ocean Droplets, and more), altering the attack surface from the attacker’s point of view, luring them into exploiting these services to gain control and move laterally. The Platform then collects adversary intelligence and sends high-fidelity alerts to users’ security tools. AMTD techniques can dynamically alter system configurations, routes, or access controls, making it difficult for attackers to understand and exploit system vulnerabilities.
CounterCraft in ICS/OT: CounterCraft can be deployed in ICS/OT networks across critical infrastructure, manufacturing and oil & gas plants, and supports deploying deception at multiple layers in ICS/OT environments (PLC/RTU, IEDs and controllers, HMI systems, applications, databases and file servers, and breadcrumbs). CounterCraft can deploy an OPC server, HMI, and emulated PLCs, which then work to lure adversaries in and engage them by scanning for open ports known to exist on vulnerable OPC servers. The security team is alerted from the moment the adversary begins their initial reconnaissance and is able to observe as the adversary drops a payload with the intent to complete their mission. Host and Endpoint Security AMTD techniques can be applied to individual hosts and endpoints, randomizing system attributes, such as operating system versions, software configurations, or user privileges. By frequently changing these attributes, AMTD can mitigate the impact of zero-day vulnerabilities and limit an attacker’s ability to exploit specific system weaknesses. CounterCraft in endpoints & hosts: CounterCraft is able to create credible decoy SWIFT terminals, creating uncertainty and confusion for the adversary. The portal itself has a login function that corresponds to the distributed credentials and decoy users that have been planted in the Active Directory. Adversaries are tempted to exploit these vulnerable hosts while the security team observes their attack vectors, paths and TTPs, learning in real time from their behavior. AMTD can be used to dynamically change network configurations, such as IP addresses, port numbers, or network paths, making it harder for attackers to identify and target specific network components, thereby reducing the risk of successful attacks. CounterCraft in internal networks: CounterCraft can be deployed in an organization’s VLAN by configuring a VLAN similar to their production environment, with vulnerabilities that are attractive from an adversary’s point of view. This allows organizations to identify exploitable vulnerabilities (even new ones) in software used across the corporate network, and gain advanced insight into production service weaknesses. Benefits of Deception as Part of an AMTD Strategy Above, we have seen how deception plays a critical role in AMTD’s comprehensive approach to cybersecurity, allowing organizations to stay ahead of novel threats and safeguard their assets and reputation. The integration of these deception techniques as part of an AMTD security infrastructure has several benefits. - Detect threats early: As soon as there is activity in a decoy breadcrumb or server, a high-fidelity alert is delivered to the security team in about 20 seconds, providing the team with situational awareness, threat hunting capabilities, and the ability to respond quickly and mitigate the damage caused by the attack. - Deflect adversaries: CounterCraft’s real IT environments make it difficult for attackers to distinguish between what’s real and what’s fake, increasing the chances of them interacting with decoys and honeypots. This provides valuable intelligence on attackers’ tactics and techniques. - Reduce risk: Deception reduces the risk of real assets being compromised, as attackers are deflected to decoys and honeypot servers, which do not contain valuable data. - Be cost-effective: CounterCraft customers reduce adversary dwell time from 100+ days to just hours.
Reducing attacker dwell times results in less impact and a much lower cost, as breaches with lengthier dwell times tend to be proportionally more expensive. Reducing dwell times by using deception technology results in a cost-of-breach savings of approximately 51% ($4.35M cost of a data breach in 2022). - Collect user and device behavior: Analyzing adversary behavior on a system or network makes it easier to identify potential threats and take action to prevent them, thanks to proactive threat hunting capabilities. As you can see, AMTD is a proactive approach that ensures organizations can detect and respond to attacks before adversaries reach the production environment. And, more importantly, it allows teams to gather intelligence on attackers’ tactics and techniques. This is one of the reasons why Gartner forecasts AMTD as “the future of security”. CounterCraft’s solution provides rich adversary-generated threat intelligence in seconds and adds a valuable data stream to EDR/XDR/SIEM/SOAR and AI-based detection systems. CounterCraft fills the gap where other security systems fail.
Although in broad terms Clearswift is undoubtedly a technology firm, we always try and use business language rather than technical talk when discussing cybersecurity and how we can help organizations. Over the last few months, we’ve published a series of blog posts that explain certain cybersecurity terms – we’ve already looked at Adaptive Redaction, Deep Content Inspection, Information Governance Server, and Optical Character Recognition, and now turn our attention to lexical expression qualifiers (LEQs). Put simply, LEQs help us improve detection rates when performing keyword scans using lexical expressions on data. How False Positives Impact DLP Locating the right data and assessing the potential threat within it is one of the principal tenets of cybersecurity. The type of data that needs protecting will vary between different organizations, but typically it will include bank account numbers, passport numbers, addresses, and other personally identifiable information (PII). Data Loss Prevention (DLP) solutions have emerged to find and prevent sensitive or confidential data from unauthorized sharing inside and outside the network. DLP works by recognizing numerical or alphabetical sequences depending on the type of identifier that it is being asked to detect. Some of these identifiers have a basic format; take an Arkansas driving license number for example, which is a number between 4 and 9 digits (1000-999,999,999) – a very wide spread of numbers considering the state has a population of just over 3m. If you were trying to detect Arkansas driving license numbers, then any number between 1000-999,999,999 would be a match and a potential false positive. Once this data has been ‘detected’, then the communication might be automatically blocked until IT can review it. Any event triggered with a DLP solution needs to be investigated, so false positives can be highly time-consuming and costly to an organization, especially if they occur in significant numbers. The first stage to reduce the number of false positives detected is to refine the query to look for more than one item that determines its true nature. For Arkansas driving license numbers this could include looking for an accompanying zip code (71601 to 72959). In this case we would add a regular expression (7[1-2][0-9][0-9][0-9]) to the policy rule. The next stage is to qualify the data even further. Introducing Lexical Expressions and Lexical Expression Qualifiers Because of the additional burden on security and IT teams, technology has evolved to help mitigate false positives. Built into all core Clearswift email and web products are pre-configured, standard lexical expressions that match general lexical patterns such as credit card or passport numbers. When it comes to other specific values that need detecting, LEQs are used as a method to validate ‘true’ information found against an external data source such as a database. If we look at the healthcare industry as an example, patient data is often shared between doctors and hospitals to facilitate the care being provided. Here, the transmission of the data should always be encrypted, and in the case of North America, the HIPAA act mandates the use of encryption for transferring of patient data. To ensure the right data is encrypted, we can inspect the data for attributes that will identify a patient’s PII.
Ideally we would look for something unique like a patient record number, but if that happens to be a 10-digit number, then telephone numbers or part numbers might also generate false positives, hence the need to further qualify the data we are looking for. To do this, we can import a snapshot of the details of the patients serviced by the district. This LEQ file is then indexed and hashed for security. When keyword search routines look for a patient record number in the data, we can use the LEQ to confirm whether the 10-digit number detected is actually valid. A policy rule can be configured to require three or more matches with the additional information from the LEQ file, such as the patient’s name, zip code and social security number, before permitting the data to be encrypted and sent securely. The more information that can be verified through LEQs, the more the system can be sure of a policy match and automatically apply the appropriate action, reducing the amount of manual intervention required.
Minimizing False Positives
We aim with all our products and solutions not only to make an organization’s defense against data loss water-tight, but also to ensure communications and collaboration continue uninterrupted. The use of LEQs continues this tradition, ensuring the data seen really is the data being searched for. As a result, we reduce the number of false positives, free up valuable IT resources and ultimately keep data safe and compliant.
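To make the two qualification stages described above concrete, here is a minimal, hypothetical Python sketch (not Clearswift's implementation): a detected 10-digit number is first qualified by a nearby zip-code match, then corroborated against a hashed LEQ snapshot, with three or more matching attributes required before the detection counts. The names, values, regular expressions and thresholds below are invented for illustration.

```python
import hashlib
import re

ZIP_RE = re.compile(r"\b7[12]\d{3}\b")      # coarse qualifier for Arkansas zip codes
RECORD_RE = re.compile(r"\b\d{10}\b")       # candidate 10-digit patient record number

SALT = b"demo-salt"

def h(value: str) -> str:
    """Salted hash, so the imported LEQ snapshot never holds clear-text PII."""
    return hashlib.sha256(SALT + value.strip().lower().encode()).hexdigest()

# Indexed LEQ file: hashed record number -> hashes of corroborating attributes.
LEQ_INDEX = {
    h("4512987603"): {h("Jane Doe"), h("72201"), h("123-45-6789")},
}

def qualified_detection(message: str, attributes: list[str], required: int = 3) -> bool:
    """True only if a candidate record number appears near an Arkansas zip code
    and at least `required` LEQ attributes corroborate it."""
    record = RECORD_RE.search(message)
    if not record or not ZIP_RE.search(message):
        return False                         # stage 1: regex qualification failed
    entry = LEQ_INDEX.get(h(record.group()))
    if entry is None:
        return False                         # number not present in the snapshot
    matches = sum(1 for a in attributes if h(a) in entry)
    return matches >= required               # stage 2: LEQ corroboration

msg = "Patient Jane Doe, SSN 123-45-6789, record 4512987603, zip 72201"
found = ["Jane Doe", "72201", "123-45-6789"]            # extracted by other lexical expressions
print(qualified_detection(msg, found))                   # True  -> real PII, encrypt before sending
print(qualified_detection("Part no. 4512987603 sent", []))  # False -> false positive suppressed
```

Hashing the snapshot mirrors the point above that the imported LEQ file is indexed and hashed rather than stored as clear-text PII.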
International Journal of Computer Applications Cognitive Radio (CR) is a promising technology in ad-hoc networks to solve the problems that result from the limited available spectrum and the inefficiency in spectrum usage by utilizing the existing wireless spectrum advantageously. When the licensed primary user is not using the spectrum, the available channels are allocated to the unlicensed secondary users. An increasing number of security threats are being identified as the idea of cognitive radio becomes a reality. One such threat is the possible presence of selfish secondary users who try to occupy all available channels.
Create microzones for finer granularity separation: The granularity of separation based on zoning approaches is typically fairly coarse. For example, it usually limits a set of users and their machines to accessing other sets of machines and their content. File system and user identity are also fairly coarse in that they typically allow access to all of the content readable by any given user within the available file systems. Finer granularity becomes a complex matter and is poorly supported at the enterprise level by technical mechanisms. Microzones created by virtualization and encryption offer a finer grain of separation by allowing each virtual machine to run a subset of programs with access to a subset of file system areas over a portion of time. It does so with finer granularity than other zoning approaches, and is more rapid to deploy in small special purpose applications.
Create microzones to limit risk aggregation: Risk aggregation may come in many forms. In addition to reduction of aggregated risk by limiting the total accessible information from a given location (e.g., a zone, subzone, machine, storage area, etc.), risk is also aggregated over time by the presence of mechanisms available over time. Microzones implemented in virtual machines, for example, only operate while the machines operate, and thus the shutdown of the machines may limit the period of exposure.
Create microzones to allow safer untrusted applications, content, and access use at lower cost: In many cases, and particularly in less controlled (non-high consequence) environments, use of untrusted content, applications, or access is desired, but the cost of creating and operating completely separate environments for such use exceeds the potential utility gained. Microzones provide the means to allow reasonable containment of content, applications, and access at relatively low cost. While this is not suitable for high consequence environments because of the surety limitations associated with microzoning today, it is highly suited to many other situations where substantial risk reduction is desired, particularly for limited time frames. A typical example is the creation of a microzone for shared access sessions involving changes to a common document, where the effort is (1) time limited (i.e., only for the period of the on-line meeting), (2) requires access to a limited storage area (i.e., the documents being reworked and related content), and (3) requires access from multiple locations (e.g., via encrypted tunnels from multiple remote desktops). For environments configured for the purpose, this requires only a few minutes at the start of a session and substantially reduces risks associated with remote desktop access and related file sharing. The typical process involves starting a virtual machine (VM), loading copies of the supporting (read-only) files into the VM internal storage area, loading the content to be modified into a shared file area (or remotely mounting an appropriate micro-zoned file system via encrypted tunnel), and running the desktop sharing application and related editing applications from inside the VM. At the end of the session, the VM is shut down (not retaining any changes made to it), thus (1) eliminating residual effects of its operation outside of the shared or remotely accessible storage area, and (2) leaving the modified document available for use.
Don't use microzones: Microzones are not suited to many cases.
For example, and without limit, microzones are not suited when large bodies of information from a wide range of a priori unpredictable areas are needed, when very high performance is required, and when high surety is required. Its use for effective protection also requires a level of maturity, typically Defined or above, although Repeatable may be adequate for low consequence environments, and it takes time and money to operate effectively.
Implement microzone computation with virtual machines within machines: This involves the creation of virtual machine (VM) environments within computers to act as microzone operating environments. In most cases, these exist either on desktops or within what is commonly called cloud computing environments. VMs in internal clouds are used just as in external clouds, but can be operated within internal zones and subzones to provide adequate and required surety levels based on internal rather than external criteria. Within user machines, VMs tend to act as temporary work areas for limited purposes. They can be popped up and shut down and granted limited access during use, retaining some or none of the content or changes associated with their periods of operation.
Implement microzone communication within subzone and zone via encrypted tunnels: If communication is required within a microzone, the typical approach is to link machines to other machines (including storage machines) using encryption. This typically consists of remote file system mounts through encrypted tunnels such as SSL or SSH tunnels. For example, from a VM, a remote area of a disk may be mounted for use via an SSH tunnel using the sftp protocol and a local mount daemon that attaches the mounted filesystem area to a virtual disk on the VM. This can be done read-only or read-write, so that specific filesystem areas can be micro-managed. However, the overhead of such micro-management may be problematic and expensive if overdone.
Implement microzone storage with sub-file-system storage areas within subzone and zone storage: Within a storage area (e.g., a disk, disk array, file server, etc.), zones or subzones may be further segregated, and presumably access restrictions associated with those areas will be in place for the microzones contained within them. But additional separation may be applied within microzones for finer granularity of control. For example, remote disk mounts may be to subdirectories within file systems within partitions within disks of a larger file system area within a zone and subzone. The smaller the area mounted, the less access and effect the microzone will have.
Implement microzone storage with encrypted storage at the microzone level: In cases where encryption is mandated or otherwise desired, encryption can be used to create microzone areas that are restricted to those with the appropriate keys. By only using (or having available) the keys within the microzone, microzones can limit access to microzone content. However, all of the issues of key management then get extended to the microzone level, a granularity at which most enterprises have difficulty operating encryption effectively.
The operating environment supports only low granularity separation and higher granularity is desired by clients or management: In most modern operating environments, separation is at most to the level of the user identity. However, in many cases, clients want their content separated from other clients, and many managers see no reason that access to all should be granted when access to only subsets is desired.
The complexity of management becomes too high for these cases at the operating environment level, but at the microzone level, it may be achieved if adequate discipline is applied. For example, file system mounts to portions of different areas may allow finer granularity than the operating environment would otherwise support.
Risk aggregation justifies separation at finer granularity than the operating environment supports: In some cases, particular operations induce undesired risks, even though risk aggregation at the zone and subzone layers is normally adequate to the need. For example, when using desktop sharing for presentations or interactions with others, even if the risk associated with the internal user is not excessive, the aggregated risk of the internal user's access is too high for the remote, less trusted user (e.g., client, potential client, etc.) to be granted as part of remote desktop or other shared access, even under any supervision that internal user may provide. In these cases, microzoning for the purposes of the remote access or collaborative effort may be suitable to the need without requiring the creation of special zones, subzones, etc. for each potential future use or enterprise management of each such instance.
Untrusted content or applications are desired but otherwise too risky: In many cases, temporary use of content or applications is highly desirable, but they are not trustworthy enough to grant access to the subzone and zone the user is otherwise able to access. In these cases, a microzone may be created for these risky operations, and less access may be granted for those uses through a microzone.
Risk aggregation, granularity, or content and application trust limitations stem from the computational mechanism (e.g., endpoint, server, etc.): In cases where trust limitations stem from the computational (as opposed to communications or storage) mechanism, virtual machines may be a reasonable solution. For example, internal users may be responsible for testing a wide range of software products for a particular use, only one of which will ultimately be used and fully vetted for its purpose. Since many of these are readily downloaded from the Internet for testing purposes, rather than create a whole testbed for this use, a microzone may be created using virtual machines and restricted storage access for the purposes of the testing, and removed afterward without residual side effects. Again, a VM can often be used to create a microzone for such a purpose at low cost.
Microzones extend beyond a single machine and its local storage: In some cases, a microzone may need to extend beyond a single machine. For example, if remote storage, sharing, or other similar interaction is desired, some methods of communication while maintaining microzone separation may be called for. In these cases, encryption between endpoints may be highly desirable.
Microzones access only a small and severable portion of the otherwise accessible file system area: In many cases, the microzoning strategy extends to restricting access to file systems and other storage. In these cases, a storage restriction mechanism may be important to microzone implementation. For example, mounting small portions of a file system, or access to embedded filesystems stored as single files within other filesystems, may be a usable approach to controlling such access.
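As an illustration of the mount-based storage restriction and encrypted-tunnel communication described above, here is a small Python sketch that exposes only a narrow subdirectory of subzone storage inside a microzone for the duration of a session, then unmounts it. It assumes the sshfs and fusermount utilities are available, the host names and paths are placeholders, and it is a sketch of the pattern rather than a hardened implementation.

```python
import subprocess
from contextlib import contextmanager

@contextmanager
def microzone_mount(remote: str, mountpoint: str, read_only: bool = True):
    opts = ["-o", "ro"] if read_only else []
    # sshfs mounts the remote directory via the sftp subsystem over SSH, so traffic is encrypted.
    subprocess.run(["sshfs", remote, mountpoint, *opts], check=True)
    try:
        yield mountpoint
    finally:
        # Unmounting when the session ends limits the time window of exposure.
        subprocess.run(["fusermount", "-u", mountpoint], check=True)

# Example session: only the shared document area is visible inside the microzone VM.
# with microzone_mount("files.zone1.internal:/zones/sales/shared-doc", "/mnt/session"):
#     ...  # run the editing and desktop-sharing applications here
```

The narrower the mounted path, the less the microzone can touch if its contents turn out to be untrustworthy.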
Encryption is required for the microzone content but not other subzone content: Encryption is sometimes desired for specific content, but not all content within a zone or subzone. For example, if a few clients demand encryption while others do not, and no other reason exists for encryption of a whole zone or subzone, a microzone encryption strategy may be applied at limited cost to support these clients without the cost or complexity of a larger scale encryption approach.
Android Malware Detection through Machine Learning on Kernel Task Structure
Type of Degree: PhD Dissertation
Department: Computer Science and Software Engineering
Restriction Type: Auburn University Users
The popularity of free Android applications has risen rapidly along with the advent of smart phones. This has led to malicious Android apps being involuntarily installed, which violate user privacy or conduct attacks. According to the survey of Android malware from Kaspersky Lab, the proportion of malicious attacks on Android software has increased by a factor of two. Therefore malware detection on Android platforms is a growing concern because of the undesirable similarity between malicious behavior and benign behavior, which can lead to slow detection and allow compromises to persist for comparatively long periods of time in infected phones. Meanwhile, a huge number of malware detection techniques have been proposed to address this serious issue and safeguard Android systems. In order to distinguish malicious apps from Android software, the traits of malware applications must be tracked by the software agent or built-in programs. However, existing approaches only utilize a short list of the Android process features without considering the completeness and consistency of the entire information. In this dissertation, we present a multiple dimensional, kernel feature-based framework and feature weight-based detection (WBD) designed to categorize and comprehend the characteristics of Android malware and benign apps. Furthermore, our software agent is orchestrated and implemented for the data collection and storage to scan thousands of benign and malicious apps automatically. We examine 112 kernel attributes of the executing task data structure in the Android system and evaluate the detection accuracy with a number of datasets of various dimensions. We observe that memory- and signal-related features contribute to more precise classification than schedule-related and other descriptors of task states listed in our paper. Particularly, memory-related features provide fine-grain classification policies for preserving higher classification precision than the signal-related and others. Furthermore, we study and evaluate 80 newly infected attributes of the Android kernel task structure, prioritizing the 70 features of most significance based on dimensional reduction to optimize the efficiency of high-dimensional classification. Our experiments demonstrate that, as compared to existing techniques with a short list of task structure features, our method can achieve 94%-98% accuracy and 2%-7% false positive rate, while detecting malware apps with reduced-dimensional features that adequately abbreviate online malware detections and advance offline malware inspections. To strengthen the online framework on a parallel computing platform, we propose a Spark-based Android malware detection framework to precisely predict the malicious applications in parallel. Apache Spark, as a popular open-source platform for large-scale data, has been used to deal with iterative machine learning jobs because of its efficient parallel computation and in-memory abstraction. Moreover, malware detection on Android platforms needs to be implemented in a data-parallel computation platform in consideration of the rapid increase in the data size of collected samples.
We also scrutinize 112 kernel attributes of the kernel task structure (task_struct) in the Android system and evaluate the detection precision for the whole datasets with different numbers of computing nodes on the Apache Spark platform. Our experiments demonstrate that our technique can achieve 95%-99% precision with a faster computing speed using a Decision Tree Classifier on average, while the other three classifiers lead to a lower precision rate when detecting malware apps with the in-memory parallel data. We have designed a Radial Basis Function (RBF) network-based malware detection technique for Android phones to improve the accuracy rate of classification and the training speed. The traditional neural network with the Error Back Propagation method cannot recognize the malicious intrusion through Android kernel feature selection. The RBF hidden centers can be dynamically selected by a heuristic approach, and the large-scale datasets of 2550 Android apps are gathered by our automatic data sample collector. We implement the algorithms of the RBF network and the Error Back Propagation (EBP) network. Furthermore, compared to the traditional EBP network, which achieves 84% accuracy, the RBF network can achieve 94% accuracy with half of the training and evaluation time. Our experiments demonstrate that the RBF network can be used as a better technique for Android malware detection.
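As a rough illustration of the classification step described in this abstract, the sketch below trains a decision tree on feature vectors standing in for kernel task_struct attributes. The data, labels, feature count and tree depth are synthetic placeholders, so this demonstrates only the workflow, not the dissertation's actual pipeline or results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2550, 112))     # 2550 apps x 112 task_struct-derived features (synthetic)
y = rng.integers(0, 2, size=2550)    # 0 = benign, 1 = malicious (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real labelled task_struct snapshots in place of the random arrays, the same few lines reproduce the train/evaluate loop the dissertation describes.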
On the surface, it seems like cybersecurity professionals would be focused on designing stronger barriers to attack and establishing firmer encryption standards, but at its core, the field is driven by data. In particular, large amounts of network data are sorted to differentiate between normal network traffic and threat activity. This data can help developers craft better barriers against intrusions, assess the consequences of an attack, and boost post-attack recovery. Simply put, cybersecurity development work wouldn’t be possible without an influx of data, and much of that data is collected and managed via machine learning practices. One reason only machine learning technology is capable of assessing network traffic for threats and anomalies is that such systems must process months of traffic data to identify underlying patterns. If a human programmer were to do this, even with digital assistance, it would take years, and then they would still need to build a solution that could identify new incoming threats based on that information. When machine learning approaches the problem, though, not only is it able to complete the pattern recognition phase swiftly, but the machine itself is then capable of identifying new risks in real time. In addition to speed, one of the primary advantages of using machine learning for cybersecurity development is that, despite the fact that businesses generally understand the importance of proactive security practices, without AI assistance, most businesses can’t actually execute such a strategy. This isn’t for a lack of trying, of course; even the most assiduous human workers simply can’t keep pace with network traffic or interpret data as quickly as computers can. Businesses that operate without a machine learning-backed security system, then, can put basic security initiatives in place, but these will never be advanced enough to be considered truly proactive. To aid businesses that are invested in advancing their cybersecurity practices, machine learning experts have stepped up to the plate and are now offering Infrastructure-as-a-Service (IaaS) programs that bring AI into offices at all levels, democratizing access to such technology. Such security practices involve monitoring network activity, to be sure, but also address privileged access management (PAM) concerns, as internal breaches are a leading cause of data theft. At present, about 60% of companies still manage these access credentials manually, but by upgrading their management using IaaS, companies can monitor access more closely, enhance multi-factor authentication practices, and monitor system use by privileged users to ensure best practices. Stopping internal breaches is a surprisingly challenging process. Another key application of machine learning and infrastructure development for cybersecurity is in the area of mobile technology. With fewer workers onsite and remote access increasingly important, being able to manage how remote devices interact with primary network systems is of growing importance – and that’s why the Department of Homeland Security (DHS) is researching mobile threat detection (MTD). Regarding AI, DHS research is interested in several different applications. These include behavioral profiling, code emulation, and intrusion protection, among others. All of the applications central to this technology, though, are designed to protect high-value data no matter where it’s accessed from or used.
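To ground the idea of learning "normal" traffic patterns and then flagging deviations in real time, here is a toy, hypothetical sketch using an unsupervised anomaly detector. The feature columns, distributions and contamination rate are invented for illustration; a real deployment would use engineered features from actual flow records.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# columns: bytes sent, bytes received, connection duration (seconds) -- synthetic "normal" traffic
normal_traffic = rng.normal(loc=[5_000, 20_000, 30], scale=[1_000, 4_000, 10], size=(10_000, 3))

model = IsolationForest(contamination=0.001, random_state=0).fit(normal_traffic)

new_flows = np.array([
    [5_200, 19_500, 28],     # looks like ordinary traffic
    [900_000, 1_200, 2],     # large one-way transfer: likely exfiltration
])
print(model.predict(new_flows))  # 1 = normal, -1 = anomaly
```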
While there are plenty of established machine learning solutions working in the cybersecurity space, one of the best ways to innovate in this area is by turning to hackers and other independent groups, and that’s where hackathons come into the picture. Hackathons, task-based gatherings at which independent developers and coders work to test new ideas and solve problems, are on the frontlines of cybersecurity today. So what happens at a hackathon? One hackathon sponsored by Wallarm through the machine learning platform Kaggle, for example, wants participants to develop more nuanced solutions for identifying malicious network activity, and Wallarm is hardly alone. A growing number of hackathons offer competitive, low-cost ways for businesses to acquire new solutions, while offering coders small prizes. It’s much more cost-effective than hiring a coder, and often more innovative. Standard computers and security experts are easily overwhelmed by the amount of data involved in cybersecurity work, making machine learning solutions vital to their success, but those solutions need to be a collaboration between human developers and their machines. As computers lead the way by processing massive amounts of information, our systems are becoming safer. The next step is expanding access to these advanced systems to businesses and organizations of all sizes. This is the beginning of something new and exciting for cybersecurity.
Prof. Bhavani Thuraisingham, PhD
Fellow of ACM, IEEE, AAAS, NAI, IMA
Erik Jonsson School of Engineering and Computer Science
The University of Texas at Dallas, USA
Title: Trustworthy Machine Learning and Its Applications in IoT Systems
The collection, storage, manipulation, analysis and retention of massive amounts of data have resulted in new technologies including big data analytics and data science. It is now possible to analyze massive amounts of data and extract useful nuggets. However, the collection and manipulation of this data has also resulted in serious security and privacy considerations. Various regulations are being proposed to handle big data so that the privacy of individuals is not violated. Furthermore, the massive amounts of data being stored may also be vulnerable to cyber attacks. In addition, Artificial Intelligence techniques including machine learning are being applied to analyze the massive amounts of data in every field such as healthcare, finance, retail and manufacturing. Machine learning techniques are being integrated to solve many of the security and privacy challenges. For example, machine learning techniques are being applied to solve security problems such as malware analysis and insider threat detection. However, there is also a major concern that the machine learning techniques themselves could be attacked. Therefore, the machine learning techniques are being adapted to handle adversarial attacks. This area is known as adversarial machine learning. In addition, the privacy of individuals is also being violated through these machine learning techniques, as it is now possible to gather and analyze vast amounts of data, and therefore privacy enhanced data science techniques are being developed. Finally, Machine Learning techniques have to be fair and not discriminate. They also have to produce accurate results. Integrating Machine Learning with features like Security, Privacy, Integrity and Fairness has come to be known as Trustworthy Machine Learning. With the advent of the web, computing systems are now being used in every aspect of our lives from mobile phones to smart homes to autonomous vehicles. It is now possible to collect, store, manage, and analyze vast amounts of sensor data emanating from numerous devices and sensors including from various transportation systems. Such systems collectively are known as the Internet of Transportation, which is essentially the Internet of Things for Transportation, where multiple autonomous transportation systems are connected through the web and coordinate their activities. However, security and privacy for the Internet of Transportation and the infrastructures that support it is a challenge. Due to the large volumes of heterogeneous data being collected from numerous devices, traditional cyber security techniques such as encryption are not efficient enough to secure the Internet of Transportation. Some physics-based solutions being developed are showing promise. More recently, the developments in Data Science are also being examined for securing the Internet of Transportation and its supporting infrastructures. Our goal is to develop smart technologies for a Smart World. To assess the developments on the integration of Machine Learning and Security over the past decade and apply them to the Internet of Transportation, the presentation will focus on three aspects.
First it will examine the developments on Trustworthy Machine Learning, including aspects of insider threat detection as well as the advances in adversarial machine learning. Some developments on privacy-aware and policy-based data management frameworks will also be discussed. Second it will discuss the developments on securing the Internet of Transportation and its supporting infrastructures and examine the privacy implications. Finally, it will describe ways in which Trustworthy Machine Learning could be incorporated into the Internet of Transportation and its infrastructures.
Biography of Dr. Bhavani Thuraisingham
Dr. Bhavani Thuraisingham (aka Dr. Bhavani) is the Founders Chair Professor of Computer Science, the Founding Executive Director of the Cyber Security Research and Education Institute, and the Co-Director of the Women in Cyber Security and Women in Data Science Centers at the University of Texas at Dallas. She has also been a visiting senior research fellow at King's College, University of London since 2015, conducting research on the foundations of IoT, and was a Cyber Security Policy Fellow at the New America Foundation focusing on workforce development from 2017 to 2018. She has also been a Member of the Faculty of Computer Science at the University of Dschang, Cameroon, Africa since 2021, giving lectures (pro bono) on Trustworthy Machine Learning. She is an elected Fellow of several prestigious organizations including the ACM, the IEEE, the AAAS and the NAI (National Academy of Inventors). Her research, development and education efforts have been on integrating cyber security and data science/machine learning for the past 37 years, including at Honeywell Inc., The MITRE Corporation, the National Science Foundation, and academia. Dr. Bhavani has received several awards including the IEEE Computer Society’s 1997 Technical Achievement Award, ACM SIGSAC 2010 Outstanding Contributions Award, 2011 AFCEA Medal of Merit, 2013 IBM Faculty Award, 2017 ACM CODASPY (Data and Applications Security and Privacy) Lasting Research Award, the 2017 Dallas Business Journal Women in Technology Award, and the 2019 IEEE ComSoc Technical Recognition Award for Communications and Information Security. She has delivered around 200 keynote and featured addresses and over 100 panel presentations, authored 16 books, and published over 130 journal articles and over 300 conference papers. Dr. Bhavani received her PhD in Computability Theory from the University of Wales, UK and the prestigious earned higher doctorate (D.Eng) from the University of Bristol, England for her published work in Secure Data Management.
Dr. X. Sean Wang
Fellow of CAAI and CCF, ACM Member, IEEE Senior Member
School of Computer Science, Fudan University, China
Title: Cloud Computing from a Task Centric Perspective
Cloud computing has become basic infrastructure that provides the computing needs for all sorts of applications. However, the model of cloud computing services still seems to be based on securing a single cloud service provider before launching tasks. This model requires a deep understanding of what services to acquire. In this talk, we try to argue for a task centric view, namely to envision a system that allows an understanding of the computing needs of a task (e.g., via automated or artificial annotation) and provides an automated process of acquiring or accepting suitable synchronous or asynchronous services from perhaps heterogeneous computing providers.
Biography of Dr. X. Sean Wang
X.
Sean Wang is Professor at the School of Computer Science, Fudan University, a CAAI and CCF Fellow, ACM Member, and IEEE Senior Member. His research interests include data analytics and data security. He received his PhD degree in Computer Science from the University of Southern California, USA. Before joining Fudan University in 2011 to be the dean of its School of Computer Science and the Software School, he served as the Dorothean Chair Professor in Computer Science at the University of Vermont, USA, and as a Program Director at the National Science Foundation, USA. He has published widely in the general area of databases and information security, and was a recipient of the US National Science Foundation CAREER award. He is a former chief editor of the Springer journal Data Science and Engineering. He is currently on the steering committees of the IEEE ICDE and IEEE BigComp conference series, and past Chair of the WAIM Steering Committee.
FortiGuard Labs Threat Analysis Blog
Jaff ransomware was originally released in the spring of 2017, but it was largely neglected because that was the same time that WannaCry was the lead story for news agencies around the world. Since that time, Jaff ransomware has lurked in the shadows while infecting machines worldwide. In this FortiGuard Labs analysis, we will look into some of the common ransomware techniques used by this malware, and how it represents the ransomware’s infection routine in general. Like many ransomware variants, Jaff ransomware commonly arrives as a pdf attachment. Once you open the attachment, it displays a one-line document along with a pop-up message asking whether you want to open an embedded file (see Figure 1). If you choose to open the file, that’s where the fun begins. It then launches an embedded document that contains instructions on how to remove Macro protection from your document (see Figure 2). The yellow strip at the top of the document includes the button “Enable Content,” which enables any macro within the document to execute. And of course, we all already know that this document contains macros. In fact, this document contains lots of macros (see Figure 3), only one of which downloads the Jaff binary file. The following is a list of macros found in this variant:
· Challenge(sender As String, e As Integer)
· Subfunc(MethodParam2() As Byte, MethodParam As String)
· Lipochanko(a, b)
· Vgux(strComputer As Integer)
· Assimptota4(FullPath As String, NumHoja As Integer)
· Assimptota6(FullPath As String, NumHoja As Integer)
· WidthA(Dbbb As String, bbbJ As String, Optional system_ofADown_Sexote As String)
· Function system_ofADown_ProjectSpeed()
· SaveDataCSVToolStripMenuItem_Click(e As Integer)
· RepackOK(sheetToMove As String, sheetAnchor As String, Assimptota6OrAfter As String)
The privateProbe() macro contains the code that downloads the Jaff binary file (see Figure 4). We can do a simple substitution to manually generate the download link. From the encoded links, we can replace the letters “RRDD” with “om” and split the links at every occurrence of the word “Nbiyure3” (see Figure 5).
Decryption, Redirection, and Garbage Code
After downloading the binary file, Jaff ransomware starts decrypting part of the malware code. It uses a simple code redirection routine as an anti-analysis trick to stretch the time it requires to analyze the actual malicious code. In between code execution, it uses garbage code that is not relevant to the malware execution. Figure 6 shows different blocks of code executed in a random fashion. Each pass through this group of code blocks decrypts a DWORD value, and the process continues until the rest of the malware is decrypted. It also shows the numbered directions of code execution for the decryption routine. Once we remove the garbage code and irrelevant blocks, we can see that only three blocks are used for the actual decryption. Figure 7 shows the same group of blocks highlighting the actual relevant code used for the decryption routine. It turns out that the actual decryption routine is just a simple XOR.
Resolving the APIs
After decrypting the malware code, most of the API names the malware uses are still hidden. Hiding API names is a malware feature designed to conceal them from an antivirus scanner. It helps the malware avoid being detected based on a combination of known APIs used by malware. There are different ways of hiding the APIs—some malware uses encryption, and some uses hashing. The latter is used by Jaff.
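Before moving on to the API-resolution steps, the link-decoding substitution just described can be sketched in a couple of lines of Python. The encoded string below is a harmless, made-up placeholder rather than a real Jaff sample; only the "RRDD" to "om" substitution and the "Nbiyure3" delimiter come from the analysis above.

```python
# Hypothetical encoded blob illustrating the scheme; URLs are defanged placeholders.
encoded = "hxxp://example-one.cRRDD/f.exeNbiyure3hxxp://example-two.cRRDD/f.exe"

def decode_links(blob: str) -> list[str]:
    # Restore "om" and split the blob into individual download URLs.
    return blob.replace("RRDD", "om").split("Nbiyure3")

print(decode_links(encoded))
# ['hxxp://example-one.com/f.exe', 'hxxp://example-two.com/f.exe']
```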
Following are the steps necessary to resolve the APIs. Initially, it looks for the “kernel32.dll” string by parsing the PEB (Process Environment Block) structure. It computes the hash of every module name found in the PEB and compares it to the hash for “kernel32.dll”. Once it finds a match, it then grabs the location of kernel32.dll and starts resolving the rest of the needed APIs in a similar fashion. After resolving all the needed APIs, Jaff performs process hollowing. This is a malware feature that, instead of dropping another executable file and executing it, overwrites part of the original malware code in memory with its new executable code. In order for Jaff to do process hollowing, it clears the memory blocks of the current process using the UnmapViewOfFile API. It then re-allocates the same memory blocks using the VirtualAlloc API, and changes their protection to PAGE_EXECUTE_READWRITE by calling the VirtualProtect API. A series of REPE MOVSB instructions are used to copy the contents of the malicious code to the newly allocated memory blocks. As we have seen so far, the decryption, code redirection, API resolution, and process hollowing are just part of the wrapper code designed to hide the actual malicious binaries. After executing all of that code, the malware is now ready to show its true nature. Interestingly enough, using the wrapping technique allows you to basically upgrade the wrapper code without the need to upgrade the malicious executable. In this way, you can quickly deploy a new version of the malware that avoids previously used detection parameters. Let’s now look at where the different parts and features of the embedded executable code are located. The resource section of the malware contains the key block. It also contains the encrypted list of extension names, a download URL link, and the ransom note (see Figure 8). The key block is a 260-byte key found in one of the resources. It is used to decrypt the contents of different resources within the section. Figure 9 shows a snapshot of the code that fetches a resource, the resource that contains the key block, and the 260-byte key. One of the resources contains the encrypted list of extension names. Figure 10 shows the encrypted and decrypted list of extension names of the files that the malware will try to search for and encrypt (see also Figure 11). Jaff’s ransom note is stored in three different formats: html, regular text, and image (bmp). The text and html versions are found in the resource section, while the bmp version is generated using the same text. Figure 12 shows the html version of the ransom note in encrypted and decrypted form, and the location in the resource section where it can be found. To generate the ransom note in image form, Jaff uses the following combinations of APIs. Figure 13 shows a sample of the ransom note in image form. The decrypt ID is dynamically generated and added to the image. In this particular variant of Jaff ransomware, this image is set as the desktop’s wallpaper after the infection.
File Encryption Routine
After all the complex code wrapping and initialization, the main malicious payload that encrypts files is the simplest routine. To encrypt a file, Jaff searches for files in a given directory, then checks whether the extension name of the file is found in the list (see Figure 11). Next, it renames the file with a .jaff extension and opens it for encryption. It then encrypts the file using a call to the CryptEncrypt API (see Figure 14).
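To illustrate the hash-based module lookup described at the start of this section, here is a toy Python sketch using a generic ROR13-style hash of the kind commonly seen in shellcode. Jaff's exact hashing algorithm is not reproduced here, and the module list is a stand-in for what the malware would read from the PEB, so treat this purely as a demonstration of why the string "kernel32.dll" never has to appear in the binary.

```python
def ror13_hash(name: str) -> int:
    """Generic ROR13 additive hash over an upper-cased, NUL-terminated name."""
    h = 0
    for ch in name.upper() + "\x00":
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF   # rotate right by 13 bits (32-bit)
        h = (h + ord(ch)) & 0xFFFFFFFF
    return h

# Only the numeric hash needs to be stored in the malware body.
TARGET = ror13_hash("kernel32.dll")

# Stand-in for the module list the malware would walk via the PEB.
loaded_modules = ["ntdll.dll", "kernel32.dll", "user32.dll"]
match = next(m for m in loaded_modules if ror13_hash(m) == TARGET)
print(match)   # kernel32.dll, found by hash comparison alone
```

The same trick is then repeated against each module's export table to resolve individual API names.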
After all possible files are encrypted, the malware drops the ReadMe.bmp, ReadMe.html, and ReadMe.txt versions of the ransom note in the given directory. One of the factors that affects the popularity of a ransomware family is the timing of its release. Jaff was released at almost the same time as WannaCry, thus killing its dream of stardom in an instant. Or maybe it was released intentionally at that moment to add stealth to its infection. Either way, we should always be ready for any malware or ransomware by keeping our defenses regularly updated. Know your vulnerabilities – get the facts about your network security. A Fortinet Cyber Threat Assessment can help you better understand: Security and Threat Prevention, User Productivity, and Network Utilization and Performance. Read about the FortiGuard Security Rating Service, which provides security audits and best practices.
Learn how the built-in tools work and how to configure them to best effect
Want to start a conversation with a stranger? Ask about the most outrageous spam message he or she has ever received. Because everyone who has an email account gets spam, this icebreaker is almost guaranteed to work—although the answer you get might embarrass one or both of you! Many Exchange administrators use third-party mail filters, but Exchange Server 2003 has a surprisingly good set of built-in spam-reduction tools. In fact, Microsoft uses these tools as a first line of defense for its own systems, and Microsoft employees will generally tell you that they don't get much spam. Is your organization getting the most out of Exchange 2003's built-in tools? To answer that question, you need a thorough understanding of the tools, how they're applied, and your configuration options.
The Exchange Antispam Process
Exchange 2003 incorporates several types of antispam protection, including blocking mail from specific IP addresses or senders and filtering with the Microsoft Exchange Intelligent Message Filter (IMF). Exchange applies filtering techniques in a predictable sequence. The process starts when a remote system opens an SMTP connection to the Exchange server. If the server is accepting connections, the following types of filtering take place:
- Connection filtering—Exchange applies checks based on the sender's IP address and other data, such as whether the SMTP conversation has the correct syntax.
- Sender and recipient filtering—Exchange checks for the sender's IP address on any blacklists and checks the sender and recipient addresses against its lists of permitted users and blocked users.
- Content filtering—Exchange passes the message through the IMF (if it's enabled).
Exchange then submits the message to the mailbox store, where it may be acted upon by the store (according to options set in the IMF) or by the client-side Outlook junk mail filter.
Setting Up Filtering
You control which types of filtering are applied to your Exchange servers in two ways. First, you can use the Message Delivery node in Exchange System Manager (ESM) to specify filtering settings for the IMF, sender and recipient filtering, connection filtering, and Sender ID filtering. Each filtering type has its own tab in the Message Delivery Properties dialog box, as you can see in Figure 1. Second, you can control which filtering mechanisms are applied to each SMTP virtual server in your organization. Open the SMTP virtual server properties and click the Advanced button on the General tab, then click Edit to display the Identification dialog box that Figure 2 shows. After you select the filtering types you want to apply, you must restart the SMTP virtual server for the options to take effect. The settings applied for each selected filter are drawn from the configuration data for each virtual server. Having independent filtering options for each virtual server gives you flexibility in how you filter inbound messages. It's important to understand that although connection, sender, and recipient filtering happen at different times, they're all part of the SMTP conversation, so they're not really discrete operations. Connection filtering is a catch-all term that includes several steps Exchange takes when accepting an SMTP connection. The connection begins when a remote server connects to the Exchange SMTP service. Exchange receives the sender's IP address and performs several checks.
- Exchange checks the sender's IP address against the lists of allowed and blocked IP addresses, which are stored in Active Directory (AD). The SMTP virtual server is smart enough to notice updates to the address lists without a service restart. If the IP address appears on the global accept list, the message is exempted from further checks. If the address is on the global deny list, the connection is immediately dropped. No nondelivery report (NDR) is generated, but the sending server receives a 5.7.0 Access Denied error message. Two sets of IP address lists are used for this step: The first set is the global accept and deny lists, defined on the Connection Filtering tab of the Message Delivery Properties dialog box, and the second set is the pair of accept and deny lists that are specific to the individual virtual server. - If you've enabled reverse DNS lookups, Exchange uses the IP address to perform a reverse DNS check to verify that a DNS name is associated with the IP address. If no result is found, Exchange drops the connection. - The Exchange server accepts the sender's HELO/EHLO message. If it's incorrectly formed, Exchange drops the connection. - Exchange accepts the sender's MAIL FROM verb, which provides what's known as the envelope FROM (or P1) address. This address is who the sender claims to be, but Exchange makes no effort to verify it. However, Exchange does check the P1 address against the list of blocked senders. If the address is on the list, Exchange drops the connection with a 5.1.0 Sender Denied error message; otherwise, the Exchange server sends a 250 OK status message and the SMTP conversation continues. If at any time during this process the Exchange filter mechanism sees SMTP verbs in the wrong sequence (e.g., DATA before MAIL FROM) or with obviously malformed arguments, Exchange drops the connection. Sender and Recipient Filtering At this stage, Exchange has accepted the P1 sender address and is ready to accept the list of message recipients. Before it does so, however, Exchange performs Realtime Blackhole List (RBL) checks. If you've configured Exchange with RBLs, Exchange checks the first RBL in the list by querying the RBL provider with the sender's IP address in reverse. For example, if the sender's IP address is a.b.c.d and the RBL provider is example.com, Exchange queries the RBL provider for d.c.b.a.example.com. If the RBL provider has a record for the sender's IP address, the provider returns a status code indicating which RBL contains the IP address; otherwise, it returns a Host Not Found error message. Exchange queries each RBL on the list until it either finds a match or runs out of RBLs. When Exchange finds a match, the Exchange SMTP server returns a 5.7.0 error with an optional custom error message you can use to explain why the email message bounced. You can also define a list of recipients for whom RBL checks should not be performed (e.g., your postmaster or abuse mailboxes). Messages to those recipients are accepted, even if the sender's IP address is on an RBL, and are flagged to exempt them from further checks. Next, Exchange accepts the RCPT TO verb, which specifies the message recipients. Like senders, recipients are checked against two lists: a list of blocked recipients (for whom all mail is refused) and a list of allowed recipients (for whom all mail is accepted). 
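Stepping back to the RBL lookup just described, the reversed-octet DNS query is easy to illustrate. The sketch below is a generic, hypothetical example (the RBL zone name is a placeholder, and real providers have their own return-code conventions); it simply treats any successful resolution as a listing, as Exchange does.

```python
import socket

def is_listed(sender_ip: str, rbl_zone: str = "rbl.example.com") -> bool:
    """Reverse the octets of the sending IP, prepend them to the RBL zone,
    and treat any successful DNS resolution as a listing."""
    query = ".".join(reversed(sender_ip.split("."))) + "." + rbl_zone
    try:
        socket.gethostbyname(query)    # an A record back means "listed"
        return True
    except socket.gaierror:            # Host Not Found -> not on this RBL
        return False

print(is_listed("192.0.2.25"))         # checks 25.2.0.192.rbl.example.com
```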
If the recipient doesn't appear on either list, what happens next depends on the Filter recipients who are not in the Directory setting on the Message Delivery Properties dialog box's Recipient Filtering tab that Figure 3 shows. When the check box is selected, Exchange compares the recipient alias against AD to ensure that the recipient is valid; messages to invalid recipients are returned with an NDR. When the recipient is valid or if the check box isn't selected, Exchange accepts the DATA verb, which contains the message contents. The DATA conversation includes transmission of the message headers, which contain the sender's reply address (aka the P2 address). If this address appears on the sender filter list, Exchange drops the connection and deletes the message. If the SMTP connection is still active and Sender ID checking is enabled, Exchange looks for a Sender Policy Framework (SPF) or Sender ID resource record on the DNS server. When a record for the purported sending domain is found, Exchange checks it against the sender's P1 address. If no record exists or if the record's information doesn't match the sender's P1 address, Exchange processes the message according to the setting selected on the Sender ID Filtering tab of the Message Delivery Properties dialog box, which Figure 4 shows. Because the Sender ID result factors into the IMF's decision about whether a message is legitimate, Microsoft recommends that you use the default of accepting messages even if they fail the Sender ID check. Few organizations have set up Sender ID records to date; as more organizations do, you'll want to consider enforcing Sender ID checks by deleting or rejecting messages from domains that don't have Sender ID records. (See Additional Resources for more information about Sender ID and IMF.) Content filtering is performed by the IMF. Unlike most third-party solutions, the IMF has few administrator-adjustable settings, depending instead on a corpus of filtering data that's produced by analyzing hundreds of thousands of messages sent to MSN Hotmail users. The IMF assigns messages two numerical scores: the spam confidence level (SCL) and the phishing confidence level (PCL). The higher the number, the more likely the message is illegitimate: An SCL of 9 means the IMF is sure the message is spam, whereas an SCL of 1 means the message is probably legitimate. An SCL of -1 means that the message was exempted from IMF scanning because it came from a trusted IP address or sender, was delivered over an authenticated connection, or came from another user in the same Exchange organization. The PCL is similar to the SCL, but it measures the likelihood that a message is phishing. As Figure 5 shows, the IMF lets you control two threshold values: the gateway threshold, which controls the SCL at which messages are rejected; and the store threshold, which controls the SCL at which messages are accepted but routed to the user's Junk E-mail folder. (There are no corresponding settings for the PCL, unfortunately.) You can also tell Exchange what to do when a message exceeds the gateway threshold: Delete the message, reject it with an NDR, archive it for later inspection, or take no action. Microsoft recommends that you initially set gateway blocking to No Action, then let the filter run for a few days and use the IMF's performance counters to help you decide where to set the gateway SCL. You can make a couple other adjustments to the IMF, although they're not exposed in the UI. 
For example, you can define a custom word list as an XML file (stored in MSExchange.UceContentFilter.xml in the \exchsrvr\bin\MSCFV2 directory) and adjust the SCL up or down for messages that contain the specified words. This is a useful function, provided you're careful about which words you include. You can also use the CustomRejectResponse registry key to force the IMF to return a custom SMTP response for messages that are rejected (e.g., 550 5.7.0 Die, spammers, die!). There's really nothing else to adjust or set. This simplicity is an advantage for many sites, though some administrators want more granular control over which messages are accepted and rejected. The Future Looks Good Exchange Server 2007 adds several new features. The basic IMF engine is largely the same, but it runs as part of the Content Filtering agent, a component that runs on the Hub Transport or Edge Transport server role to protect your network by filtering messages at the perimeter. Exchange 2007 also introduces Automatic Updates to the IMF's filter corpus; a UI for setting the IMF's custom words list; and a flexible engine for defining rules to block, quarantine, forward, or drop messages according to criteria such as subject, attachment size or type, sender, and recipient. The Edge Transport server role can collect safe and blocked sender lists from individual Outlook mailboxes and aggregate them for perimeter filtering, and there are a host of other minor tweaks. However, these improvements don't change the fact that Exchange already has a robust set of antispam tools that are included at no extra cost. If you're not using them, give them a try to see how well they meet your needs. You might find that the cost of maintaining message hygiene drops significantly without decreasing the degree of protection you get. “The Sender ID Standard,” InstantDoc ID 43917 “Want to Tick Off Spammers? Try Sender ID,” InstantDoc ID 49313 For more about IMF: “Deploying Exchange Intelligent Message Filter,” InstantDoc ID 43151 “The Exchange Intelligent Message Filter,” InstantDoc ID 42682 “Sender ID Technology: Information for IT Professionals” http://www.microsoft.com/mscorp/safety/technologies/senderid/technology.mspx “Exchange Intelligent Message Filter Overview” http://www.microsoft.com/technet/prodtechnol/exchange/downloads/2003/imf/overview.mspx To download the IMF v2 Operations Guide: http://tinyurl.com/g9gsz
Locky is currently one of the top 3 ransomware threats, following closely behind CryptoWall. It's not surprising that this strain has undergone several updates since the beginning of the year, the most recent being discovered on July 12. The Russian Cyber Mafia behind Dridex and Locky ransomware have added a fallback mechanism in the latest strain of their malware, created for situations where their code can't reach its Command & Control server. Researchers from antivirus vendor Avira blogged about this version, which starts encrypting files even when it cannot request a unique encryption key from the C&C server because the computer is offline or a firewall blocks outgoing communications. Calling the mothership is normally required for ransomware that uses public key cryptography. And actually, if the code is unable to call home to a C&C server after it infects a new machine, most ransomware does not start the encryption process and is dead in the water. Why? The encryption routine needs unique public-private key pairs that are generated by the C&C server for each infection. How does this work? Here is a simplified sequence of events.
- The ransomware program generates a local encryption key and uses an algorithm like AES (Advanced Encryption Standard) to encrypt files with certain extensions.
- It reaches out to a C&C server and asks that machine to generate an RSA key pair for the newly infected system.
- The public key of that pair is sent back to the infected machine and used to encrypt the AES encryption key from step 1. The private key (needed to decrypt what the public key encrypted) stays on the C&C server; it is the key you get when you pay the ransom, and it is used for decryption.
As you can see, a lot of ransomware strains are useless if a firewall detects their attempt to call home and blocks it as suspicious. There is another scenario, however... As damage control, organizations also cut off a computer from the network the moment a ransomware infection is detected. They might even take the whole network offline until they can investigate whether other systems have also been infected. The silver lining? If someone pays the ransom and gets the private key, that key will work for all other offline victims of the same Locky configuration as well, so expect a free decryptor to become available in the near future. Here is the blog post with the list of 11 things you can do to block ransomware. Find out which of your users' email addresses are exposed before the bad guys do. The Email Exposure Check is a one-time free service. We will email you back a report containing the list of exposed addresses and where we found them within 2 business days, or sooner! This shows you your phishing attack surface which the bad guys will use to try to social engineer your users into opening an attachment infected with ransomware.
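The three-step sequence in the list above is ordinary envelope (hybrid) encryption, and a small defensive illustration makes it easier to see why the private key is what the ransom actually buys. The sketch below uses the Python cryptography library; nothing in it is Locky's code, and in a real infection the key pair would be generated on the attacker's server rather than locally.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for steps 2-3: the key pair the C&C server would generate per infection.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Step 1: a locally generated symmetric key encrypts the file contents.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"document contents")

# Step 3: only the symmetric key is wrapped with the public key; recovering it
# requires the private key, which never leaves the key holder's server.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Paying the ransom amounts to asking the key holder to perform this step:
recovered = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered).decrypt(ciphertext) == b"document contents"
```

Because only the wrapped symmetric key travels with the victim's data, being unable to perform the key exchange is what historically left such ransomware "dead in the water"; the new variant instead proceeds without that exchange, which is why, as noted above, one paid-for private key is expected to work for every offline victim of the same configuration.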
Brand protection services can help you with trademark infringements, the takedown of infringing domain name registrations, cease and desist orders and further law enforcement if necessary. The term Brandjacking describes the activity of hijacking a brand identity, usually of a well-known brand, to cause some type of damage to the brand’s reputation, brand equity or online identity. Cybersquatting is the practice of the unauthorized registration of a domain name that is identical or very similar to a well-known brand name, in the hope of reselling it to the trademark or name owner for a profit. Domain Backorder refers to a service offered by domain registrars or third-party companies that allows individuals or businesses to reserve or place a request for a domain name that is currently owned by someone else and is about to expire. Domain Spoofing is a type of phishing where a cybercriminal imitates a well-known business, person or brand by creating fake websites or email domains to trick people into trusting them. Domain Squatting is a different word for Cybersquatting. Dropped Domains refer to domain names that were previously registered but have not been renewed by their owners within the domain's expiration period. Typosquatting is a specific form of cybersquatting, which is the practice of registering domain names, in particular ones using very well-known brand names or companies, in the hope of reselling them to the trademark or name owner for a profit. A UDRP Procedure is a conciliation procedure introduced by the Internet Corporation for Assigned Names and Numbers (ICANN) for resolving domain name disputes in which trademark rights are clearly violated.
Results 1 to 4 of 4 Thread: Understanding Network Security
- Join Date - Jun 2005
Understanding Network Security
I was wondering if anyone might know of good reference material, books, websites, etc. that discuss network security issues in layman's terms. So far most of what I have found is a lot like trying to learn Greek, with me understanding little of what I read. I would like to set up a dedicated Linux box as a firewall and would like to have a deeper understanding of the different types of configurations that are possible. Hopefully I will find something that deals with these issues in an understandable way and also discusses the ramifications of one method over another. I run a dual boot system and most of the firewalls I have used on the Windows side are very confusing to me. A lot of the time they give you a pop up that informs you that some cryptically named program is trying to access the network or the internet and wants to know if I want it to or not; 99% of the time I have no idea if it is a legitimate program or not. I realize that this is probably a separate issue (knowing how to identify programs and processes that should have access from those that should not) from setting up a firewall and basic network security, but I know that they are related. Also, does anyone know if it is possible to build a Linux firewall and run it in a VM? If anyone has any information concerning these issues that you can point me to, I would greatly appreciate it.
Security has many aspects. It is generally hard to list all of them, but all of them protect you from something. Starting from a single computer you can protect it from:
* the users that use it
* others that try to access it (and use it like a valid user does)
In this case I also regard software that runs on a computer as a user, not only the user that sits in front of it. So, to protect a computer from potentially dangerous operations you have to make sure nobody gets access to something he doesn't have access to. These can be confidential information (prohibit even read access), data that shouldn't be written to (prohibit write access) or software that shouldn't be run (prohibit execute access). These are some keywords that are pretty common for these kinds of things (you can look them up in Google):
* linux users and groups
* linux file permissions
* ACL (access control lists)
* chroot environments
To secure a system from dangerous operations, a user should be restricted by means of access permissions so that he can't ever harm a system or access parts that he shouldn't. What he will actually be able to do depends strongly on who he is and what he needs to do. For example, it makes no sense to deny the database administrator user access to run database maintenance. Now one has mostly secured the system from the inside. Surely there are holes like jailbreaks and such that cannot be stopped as long as there is software with buffer overflow flaws, and you'll never get rid of them. You'll need to trust the users on your computer that they will not harm your system. If you can't trust them, don't let them in. And this is what brings me to the next topic: tighten a computer's security by preventing others from accessing it. The idea behind this is: a bad person that can't access a system can't harm it. Usually you have means of logging into a system.
This is because you want to be able to access your system yourself; otherwise a computer is pretty useless. There are several measures to improve a computer's security. One big one is a firewall: you can accomplish nearly everything (in regard to network security) with a good firewall like iptables. You can secure single computers from other computers within your LAN, but you can also protect your LAN from the rest of the world. This topic is quite broad and you should just read up on these things. I consider these two little tools the most important: * iptables * fail2ban Things that are (in my eyes) useless: * port knocking If other things come to mind, I'll post them. Anyway, click through Wikipedia starting from one of the above topics. That should give you a good start. - Join Date - Jun 2005 Thanks so much for taking the time to answer my post, it is much appreciated, and I will look up the topics that you have mentioned here. I generally have a pretty good idea as to how permissions work under Linux and so far have not had any known issues with any of my Linux boxes; however, it has been a different case while running Microsucks OS of the month. I would have given up on running Windows long ago had I not become an older gamer. I need to get a handle on iptables, as I presently do not fully understand how to use it correctly, and fail2ban I have never heard of before, so I will definitely look that one up. My situation is not all that complex: at present just a home network that I like to log in to remotely with my new cell phone running Android, streaming various media from my home theater system. I do want to build a good firewall for my network even though it does not have any national secrets on it. Thanks once again for your help and for taking the time to write all that you did. National secrets or not, I generally dislike unauthorized people viewing my stuff. Your use case pretty much matches my system at home, except the streaming stuff. At home I have: * pc (winX) * server (xubuntu) + apache2 + rtorrent + rutorrent + webdav + xbmc; connected to a 47" TV * laptop (ubuntu netbook version) * asus wl-500gp with openwrt on it (works as internet firewall) Regarding the server: * xbmc is already set up such that it could stream media, even though I never tested that. * I got a nice remote control to control it even three rooms away (in case I listen to music while lying in bed and do not want to go to the living room) * port forwarding is active for ports 22 and 80 * thus apache2 provides webdav and other things, but it is strictly split into private and public parts such that nobody outside will see the webdav, even though I can access it from the LAN (central storage of multimedia and other files that should be stored on a RAID) * everything is guarded by fail2ban in case someone tries to force their way into the server * a teamspeak server and other things that I need (more or less) * each process has its own user, and they are safely grouped depending on what they need access to (e.g. the media group can access the 3 TB RAID where all multimedia files are stored) All this runs on an older-generation dual-core Atom (see: Zotac ION ITX) with 3 GB of RAM. The CPU load never exceeds 0.7, and thus I can say: it just works. Just yesterday I took a look at the fail2ban logs and saw that the log had been rotated 4 times, so it is doing its job of keeping the bad guys out (3 failed SSH logins ==> 15 min firewall ban).
Intrusion detection is used to monitor and capture intrusions into computer and network systems that attempt to compromise their security. Many intrusions manifest as dramatic changes in the intensity of events occurring in computer networks. Because exponentially weighted moving average (EWMA) control charts can monitor the rate of occurrence of events based on their intensity, this technique is appropriate for implementation in intrusion detection systems.
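To illustrate the abstract's idea, here is a minimal Python sketch of an EWMA control chart applied to per-interval event counts, flagging intervals whose smoothed intensity drifts outside the control limits. The smoothing weight, limit width and baseline estimation are illustrative simplifications (a real system would estimate the baseline from known-clean traffic).

```python
def ewma_monitor(event_counts, lam=0.2, L=3.0, mu0=None, sigma0=None):
    """Flag intervals whose EWMA of event intensity leaves the control limits.

    event_counts: events observed per time interval
    lam:          smoothing weight (0 < lam <= 1)
    L:            width of the control limits in standard deviations
    mu0, sigma0:  baseline mean/std; crudely estimated from the data if not given
    """
    n = len(event_counts)
    if mu0 is None:
        mu0 = sum(event_counts) / n
    if sigma0 is None:
        sigma0 = (sum((x - mu0) ** 2 for x in event_counts) / n) ** 0.5

    # Asymptotic control-limit width for the EWMA statistic
    width = L * sigma0 * (lam / (2 - lam)) ** 0.5

    z = mu0                      # EWMA statistic starts at the baseline mean
    alarms = []
    for t, x in enumerate(event_counts):
        z = lam * x + (1 - lam) * z
        if abs(z - mu0) > width:
            alarms.append((t, round(z, 2)))
    return alarms

# Example: a burst of connection attempts in the last few intervals
counts = [4, 5, 3, 6, 4, 5, 4, 30, 42, 55]
print(ewma_monitor(counts))
```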
When evaluating cloud providers, it's important to understand who is responsible for cloud security. Since the lines are often blurred, encryption is imperative to keep your data from prying eyes. Retail Cybersecurity Is Lagging in the Digital Transformation Race, and Attackers Are Taking Advantage Retail cybersecurity requires a large-scale transition to cope with new threat vectors, close significant infrastructure gaps, and extend security protocols across new cloud and SaaS platforms. Enterprises Using IaaS or PaaS Have 14 Misconfigured Instances on Average, Cloud Adoption Study Finds A cloud adoption report found that companies that deploy infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) have an average of 14 misconfigured instances running at a given time. Cloud security must be a team effort between providers and customers. The distribution of responsibility depends on the cloud model. Despite its many cost and efficiency benefits, adopting SaaS can introduce new security issues if not managed and tested according to best practices. When adopting PaaS solutions, IT leaders must consider the many security concerns that arise when data is stored and shared using cloud services. Many organizations are choosing to adopt cloud and hybrid cloud architectures to integrate with infrastructure-as-a-service (IaaS) solutions. As clients embrace cloud environments, firewalls come down and new types of users start accessing cloud data, introducing additional security challenges. In the final installment of this series, we focus on S$A attacks as well as some ways your organization can prevent side-channel attacks on its VMs. Discussion of two side-channel attacks meant to retrieve sensitive information from a virtual machine (VM) on the same physical processor package.
The main objective of this project was to define how financial institutions are leveraging artificial intelligence (AI) as part of their cybersecurity strategy. The Capstone team built a framework with offense, defense, and implementation sides. For the project methodology, the team conducted a literature review and expert interviews. First, on the offense side, the team identified five key AI-related cybersecurity attacks: deepfakes, phishing, sniffing, distributed denial-of-service (DDoS), and data poisoning. Second, on the defense side, the team adopted two approaches: 1) countermeasures against each attack and 2) overarching countermeasures. For the first approach, the team concluded that analysis of video and audio, liveness checks / biometrics, adaptive authentication, natural language processing, adaptive protection, and AI confirmation of data are the important tools for mitigating these attacks. For the second approach, anomaly detection needs to be closely reviewed. Lastly, on the implementation side, the team reviewed two corporate venturing strategies: strategic partnership and strategic investment. Joint R&D and venture-client arrangements are the representative methods of the first strategy, and acquisitions and accelerator programs are the notable methods of the second. The conclusions of this project are as follows: - The impact of AI on cybersecurity will likely expand the threat landscape, introduce new threats and alter the typical characteristics of threats. - In responding to traditional cyber attacks intensified by AI, conventional measures remain effective alongside AI-related measures. - To achieve tangible results within a limited timeline, developing a well-prepared strategic partnership is strongly recommended.
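As a toy illustration of the "anomaly detection" countermeasure the report highlights, the sketch below trains scikit-learn's IsolationForest on made-up login telemetry and scores an outlier. The feature set, values and thresholds are invented purely for illustration; a real deployment would use curated telemetry and tuned parameters.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per login: [hour of day, bytes transferred (KB), failed attempts]
normal = np.column_stack([
    rng.normal(10, 2, 500),      # mostly business hours
    rng.normal(200, 50, 500),    # typical transfer size
    rng.poisson(0.2, 500),       # rare failed attempts
])
suspicious = np.array([[3.0, 5000.0, 9.0]])   # 3 a.m., huge transfer, many failures

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 means "anomalous" in scikit-learn's convention
```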
How do I block a computer which is on a network? To block the computer, go to Internet > cable modem > my computer > network switch > computers. Select the computer that you want to block and save your settings. You can find detailed steps at http://esupport.trendmicro.com/solution/en-us/1038374.aspx. Alternatively, unplug the Ethernet cable that connects the computer to the network; if it is connected wirelessly, change the Wi-Fi password on your router.
# Sensitive Command Token # What is a Sensitive Command token Have you ever wanted a quick alert if an unexpected Windows process runs on a host? This simple Canarytoken lets you set up an alert for any time a specific command is executed. The token creates a registry key and notifies you in near real-time that the command of interest has been executed. # Creating a Sensitive Command token Head on over to canarytokens.org (opens new window) and select Sensitive command token: Enter your email address or webhook address, along with a reminder that will be easy to understand and the name of the program you want to alert on, then click Create: Download the .reg file and install it on a Windows 10 or Windows 11 system. You can do this from an Administrative Command Shell. reg import <filepath\filename.reg> # How to use this token Once installed (with admin permissions), you'll get an alert whenever someone (or someone's code) runs your sensitive process. The alert will automatically provide the command used, the computer the command ran on, and the user invoking the command. Ideal candidates are executables often used by attackers but seldom used by regular users (e.g., whoami.exe, net.exe, wmic.exe, etc.). You can also use this for attacker tools that are not present on your system (e.g., mimikatz.exe); if they are ever downloaded and run, you'll get an alert! Use a network management tool to deploy across your organization.
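For scripted rollout, the documented `reg import` step can be wrapped in a small script and pushed by whatever management tooling you already use. The sketch below is a minimal Python wrapper around that same command; the file path is a placeholder for the .reg file downloaded from canarytokens.org, and it must run on Windows in an elevated (administrative) context.

```python
import subprocess
import sys

def install_token(reg_path: str) -> None:
    """Apply a downloaded Canarytoken .reg file with the documented 'reg import' command."""
    result = subprocess.run(
        ["reg", "import", reg_path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.exit(f"reg import failed: {result.stderr.strip()}")
    print(f"Imported {reg_path}")

if __name__ == "__main__":
    # Placeholder path; point this at the .reg file you downloaded.
    install_token(r"C:\deploy\sensitive_command_token.reg")
```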
A class of cyber attacks called false data injection attacks, which target the measurement data used for state estimation in the power grid, is currently under study by the research community. These attacks modify sensor readings obtained from meters with the aim of misleading the control center into taking ill-advised response actions. It has been shown that an attacker with knowledge of the network topology can craft an attack that bypasses the existing bad data detection schemes (largely based on residual generation) employed in the power grid. We propose a multi-agent system for detecting false data injection attacks against state estimation. The multi-agent system is composed of software-implemented agents created for each substation. The agents facilitate the exchange of information, including measurement data and state variables, among substations. We demonstrate that the information exchanged among substations, even if untrusted, enables agents to cooperatively detect disparities between the local state variables at a substation and the global state variables computed by the state estimator. We show that a false data injection attack that passes bad data detection for the entire system does not pass bad data detection for each agent.
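The abstract's key premise, that knowledge of the measurement matrix lets an attacker evade residual-based bad data detection, can be shown numerically. The following Python/NumPy sketch uses a toy least-squares state estimator: a naive injection inflates the residual, while a structured injection of the form a = Hc leaves it essentially unchanged. The matrix sizes and values are arbitrary toy numbers, not a real grid model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy DC state-estimation setup: z = H x + e, with more measurements than states.
H = rng.normal(size=(8, 3))            # measurement matrix (assumed known to the attacker)
x_true = np.array([1.0, 0.5, -0.25])
z = H @ x_true + rng.normal(scale=0.01, size=8)

def residual_norm(measurements):
    """Least-squares state estimate and the norm of the measurement residual."""
    x_hat, *_ = np.linalg.lstsq(H, measurements, rcond=None)
    return np.linalg.norm(measurements - H @ x_hat)

# A naive injection on one meter raises the residual and would be flagged.
naive = z.copy()
naive[0] += 5.0

# A structured injection a = H c shifts the estimate by c but leaves the residual intact.
c = np.array([0.3, -0.2, 0.1])
stealthy = z + H @ c

print("clean   :", residual_norm(z))
print("naive   :", residual_norm(naive))     # large residual -> detected
print("stealthy:", residual_norm(stealthy))  # ~ same as clean -> bypasses detection
```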
The Forta Network acts like a giant, shared security camera and alarm system, monitoring public blockchains in real-time for threats, anomalies, security-related events and other noteworthy activity. Put differently, Forta is the “real-time monitoring layer” in the Web3 tech stack. The Network is comprised of two primary components - detection bots and scan nodes. Detection bots are the equivalent of tiny cameras, built by developers and published on the network. What each bot monitors for is determined by the logic written by its developer. Bots vary in complexity, with some monitoring for a single condition (ex: a multi-sig transaction above a certain amount threshold), and others monitoring for a combination of different factors (ex: scam activity using a combination of advanced heuristics and machine learning models). When a bot finds what it’s looking for, it emits an alert. To prevent spam and malicious bots from being published and consuming network resources, developers are required to stake at least 100 FORT on each detection bot they publish. Bots without the minimum stake will be inactive. The other component of the network is scan nodes, and you can think of scan nodes as servers that provide capacity to the Forta Network. Scan nodes are responsible for running detection bots, providing them with blockchain data and publishing any alerts. Anyone can run a scan node as long as they stake the required amount of FORT tokens. Each scan node listens for blocks and transactions from a blockchain. Currently, the Forta Network runs scan nodes for EVM blockchains such as Ethereum, Polygon and BNB Chain (complete list of supported chains here). Each scan node is assigned a set of detection bots to run by the Forta Network. When a new bot is published, it is randomly assigned to one or more scan nodes and begins running shortly thereafter. The scan node collects any alerts reported by the detection bots and publishes them. To hold scan node operators accountable for operating in the best interest of the network, each scan node must be staked with at least 2,500 FORT. Collectively, the detection bots on the Forta Network are generating hundreds of thousands of alerts and other data points every hour. Users can subscribe to alerts from a specific detection bot using the Forta App. They can also browse and search the latest alerts using the Forta App. Also, more technical users can query for alerts using the Forta API to integrate alert feeds right into their own applications.
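As a conceptual illustration of the "multi-sig transaction above a threshold" bot example mentioned above, here is a small Python sketch of the detection logic. It operates on a generic transaction dictionary rather than the actual Forta SDK, whose exact interfaces are not reproduced here; the watched address, field names and threshold are illustrative assumptions.

```python
WATCHED_MULTISIG = "0x0000000000000000000000000000000000000001"   # placeholder address
THRESHOLD_WEI = 100 * 10**18                                      # e.g. 100 ETH

def handle_transaction(tx: dict) -> list[dict]:
    """Return a list of alert findings for a single transaction."""
    findings = []
    to_address = (tx.get("to") or "").lower()
    if to_address == WATCHED_MULTISIG and int(tx.get("value", 0)) >= THRESHOLD_WEI:
        findings.append({
            "name": "Large multi-sig transfer",
            "description": f"{tx['value']} wei sent to watched multi-sig",
            "severity": "HIGH",
            "tx_hash": tx.get("hash"),
        })
    return findings

# Example transaction shaped the way a scan node might hand it to the bot
example_tx = {"hash": "0xabc...", "to": WATCHED_MULTISIG, "value": 150 * 10**18}
print(handle_transaction(example_tx))
```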
How to Enable or Disable Content Filtering Applies to: Exchange Server 2007 SP3, Exchange Server 2007 SP2, Exchange Server 2007 SP1, Exchange Server 2007 Topic Last Modified: 2007-03-13 This topic explains how to enable or disable content filtering functionality. By default, in Microsoft Exchange Server 2007, content filtering is enabled on the Edge Transport server only for inbound, unauthenticated messages from the Internet. These messages are handled as external messages. You can disable content filtering functionality in individual computer configurations by using the Exchange Management Console or the Exchange Management Shell. In addition, you can enable or disable filtering of internal messages and external messages by using the Exchange Management Shell. You cannot use the Exchange Management Console. However, as a best practice, you should not filter messages from trusted partners or from inside your organization. When you run anti-spam filters, there is always a chance that the filters will detect false positives. To reduce the chance that filters will mishandle legitimate e-mail messages, you should enable anti-spam agents to run only on messages from potentially untrusted and unknown sources. To perform the following procedures on a computer that has the Edge Transport server role installed, you must log on by using an account that is a member of the local Administrators group on that computer. Also, before you perform these procedures, confirm the following: As described earlier in this topic, content filtering functionality is enabled for external messages. The following procedures demonstrate how to enable or disable content filtering functionality by using the Exchange Management Console or the Exchange Management Shell. The Content Filter agent is the underlying agent for content filtering functionality. It's important to understand that when you perform the following procedures, the content filtering functionality is enabled or disabled, but the underlying Content Filter agent is still enabled. To disable the underlying Content Filter agent, run the Disable-TransportAgent cmdlet. Open the Exchange Management Console on the Edge Transport server. In the console tree, click Edge Transport. In the work pane, click the Anti-spam tab, and then select Content Filtering. In the action pane, click Enable or Disable as appropriate. By default, content filtering functionality is enabled for external messages. The following procedures demonstrate how to enable or disable content filtering for internal and external messages by using the Exchange Management Shell. You cannot use the Exchange Management Console to enable or disable content filtering for internal or external messages. For detailed syntax and parameter information, see Set-ContentFilterConfig. For more information about how to configure content filtering, see the following topics:
[RU1/22] Zeinab Rahal: digital twins for cybersecurity At the spring 2022 research update, our PhD student Zeinab Rahal presented her latest results regarding "Digital twins for cybersecurity". This presentation is part of the bi-yearly research update events of the chair Cybersecurity for Critical Networked Infrastructures (cyberCNI.fr). More information on our website https://cyberCNI.fr/ We cordially invite you to contact us for collaborations, partnerships, etc. We are constantly looking for new industry partners to strengthen our profile. Make an appointment to find out more! 5G networks are vulnerable to cyber attacks, and concrete examples of attacks against networks have shown the consequences of exploiting these vulnerabilities. The proposed topic focuses on attacks against 5G networks, with a particular focus on network slicing attacks. Specifically, it will study the capabilities of attackers who attempt to gain knowledge about the dynamics of a system. Such attackers can carry out sophisticated attacks that may elude current intrusion detection systems. The proposed approach consists of studying the concept of a digital twin as an auxiliary to the intrusion detection system. More specifically, it will propose solutions for designing an adversarial twin adapted to security oversight functions and analyze its robustness in detecting complex and stealthy attacks. To validate the approach, the model will be compared to attacker models with sophisticated intrusion capabilities, based on in-depth knowledge of the system under attack as well as, possibly, of the functioning of the digital twin. About Zeinab Rahal She graduated from the Lebanese International University, Bekaa, Lebanon, with a Master of Science in Computer and Communication Engineering. She worked for several years in the software engineering domain. She holds a Research Master in Information Systems and Data Intelligence from the Lebanese University, Beirut, Lebanon. Before starting her PhD studies, she conducted a research internship at Télécom SudParis, working on Internet-of-Things (IoT) planning and Artificial Intelligence (AI). Her research interests include wireless communication, optimization, AI, IoT and cybersecurity. About the cyberCNI.fr Research Update The cyberCNI.fr (https://cyberCNI.fr/) Research Update (Spring/Fall) happens once per semester. It is the big status event of the chair Cyber CNI. Everyone working with the chair (PhD students, PostDocs, engineers, …) presents their progress, current work, and next challenges. There are lively discussions with the audience on the topics. It is the perfect opportunity to get an overview of, and discuss, what is going on at the chair. From the spring 2022 event onward, the Research Updates start with an industrial keynote from one of our partners, giving insights into and showcasing their work. About the chair Cybersecurity of Critical Networked Infrastructures (cyberCNI.fr) The Cyber CNI chair at IMT Atlantique is devoted to research, innovation, and teaching in the field of the cybersecurity of critical infrastructures, including industrial processes, financial systems, building automation, energy networks, water treatment plants, and transportation. The chair covers the full stack, from sensors and actuators and their signals, over industrial control systems and distributed services at the edge or in the cloud, to user interfaces with collaborative mixed reality, and security policies.
The chair currently hosts 6+3 PhD students, 1+3 PostDocs, 11 professors, 1+1 engineers, and 1 internship student. The chair runs a large testbed that enables applied research together with the industry partners. The industry partners of the current third funding round are Airbus, Amossys, BNP Paribas, EDF, and SNCF. The chair is located in Brittany, France, the number-one cybersecurity region in France. The chair Cyber CNI is strongly embedded in the cybersecurity ecosystem through its partnerships with the Pôle d'Excellence Cyber (PEC) and the Brittany Region. The chair provides a unique environment for cybersecurity research with many development possibilities.
Through a combination of technology and processes, a cloud environment can meet the most stringent of security requirements. There are a number of physical access security measures which can be implemented, including 24x7x365 onsite security, biometric hand-geometry readers on all doors and equipment cages, plus around-the-clock CCTV monitoring delivering detailed surveillance and audit logs. Segmenting each client into their own VRF (Virtual Routing and Forwarding) prevents clients from seeing or accessing each other's networks. VRF technology also eliminates the problem of clients having the same IP range as another client, and it gives all clients their own dedicated public IP range, which enhances security for public-facing services such as VPN connections. Through firewalling technologies (such as those provided by Cisco), each client should have their own set of firewall rules which are not shared with, or impacted by, other client configurations. This means firewall configuration and changes are completely independent of other customers. Each customer will be provisioned with one or more VLANs to cater for any internal requirements; these VLANs map back to the individual client's VRF. Private WAN networks and physical hardware can be patched into your VLAN at this level. Virtual Server Security VMware is recognised within the industry as the leader in virtual technology platforms, and their vSphere offering was specifically built for the cloud. Each customer's virtual servers are attached to the network via one or more customer-segmented port groups. Each port group ties the servers into the customer-allocated VLANs created as part of the per-customer cloud network security. Each customer's virtual servers have their own VMDK (Virtual Machine Disk Format) files, which represent the drives created as part of the virtual server. This virtual disk contains the server's operating system and associated data drives. The virtual server operating system has no visibility of SAN storage or of other VMDKs existing in the environment. Further to this, using Fibre Channel for SAN storage connectivity can help alleviate the security implications associated with other methods of attachment, such as IP storage. Third Party Security Audits Undertaking regularly scheduled third-party security audits with independent security companies, against industry IT security standards, will ensure your data security is continually maintained. Implementing a stringent ITIL-aligned change management process reduces risk by enforcing standard methods and procedures for efficient and prompt handling of changes, while minimising the impact of change on service availability. All change requests should undergo a stringent security impact assessment before being approved and implemented. Harbour IT implements all the above security measures and can provide further details to satisfy your specific requirements for compliance with IT risk management legislative provisions and your corporate IT security policy.
What is a LOG file The LOG file extension is, as the name suggests, associated with log files kept by the operating system and by programs. Usually they are ASCII text files and contain information about running or recently used programs, along with timestamps. If a LOG file is currently in use by the operating system or a program, it cannot be opened unless the program the user is opening it with allows read-only mode. Likewise, these files cannot be deleted or moved to another directory while in use. LOG files are usually created by programs themselves to keep track of events, or of locations and names during an installation. Web servers can also generate such files for the purpose of monitoring traffic and bandwidth usage; in this case, the text log is often converted into graphs. A LOG file that may frequently appear on a computer is TVDEBUG.LOG, which is associated with ZoneAlarm. This log file grows continuously, which makes it necessary to monitor it and delete it from time to time. Here's a small but not exhaustive list of programs that can open LOG documents: - Apple Console - Apple TextEdit - Microsoft Notepad - Microsoft Word - OpenOffice Writer
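Since files like TVDEBUG.LOG grow continuously and need periodic attention, a small housekeeping script can watch the size and rotate the file when it gets too large. The Python sketch below is one illustrative way to do that; the path and size limit are placeholders, and a file that is currently locked by its program is simply skipped.

```python
import os
import time

LOG_PATH = r"C:\Windows\Temp\TVDEBUG.LOG"   # placeholder path; adjust for your system
MAX_BYTES = 10 * 1024 * 1024                # rotate once the file passes 10 MB

def rotate_if_large(path: str, max_bytes: int) -> None:
    """Rename an oversized log with a timestamp so its program starts a fresh one."""
    try:
        size = os.path.getsize(path)
        if size > max_bytes:
            stamp = time.strftime("%Y%m%d-%H%M%S")
            os.replace(path, f"{path}.{stamp}")
            print(f"rotated {path} ({size} bytes)")
    except OSError:
        # The file may be absent, or locked because its program is still writing to it.
        pass

if __name__ == "__main__":
    rotate_if_large(LOG_PATH, MAX_BYTES)
```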
Feature Policy allows you to control which origins can use which features, both in the top-level page and in embedded frames. Essentially, you write a policy, which is an allowed list of origins for each feature. For every feature controlled by Feature Policy, the feature is only enabled in the current document or frame if its origin matches the allowed list of origins. For each policy-controlled feature, the browser maintains a list of origins for which the feature is enabled, known as an allowlist. If you do not specify a policy for a feature, then a default allowlist will be used. The default allowlist is specific to each feature. A policy is described using a set of individual policy directives. A policy directive is a combination of a defined feature name and an allowlist of origins that can use the feature. Allowlist values include: - 'src': (in an iframe allow attribute only) the feature is allowed in the iframe as long as the document loaded into it comes from the same URL as the iframe's src attribute. - <origin(s)>: the feature is allowed for the specific origins listed; origin URLs are separated by spaces. Feature Policy provides two ways to specify policies to control features: The primary difference between the HTTP header and the allow attribute is that the allow attribute only controls features within an iframe. The header controls features in the response and any embedded content within the page. You can send the Feature-Policy HTTP header with the response of a page. The value of this header is a policy to be enforced by the browser for the given page. It has the following structure. Feature-Policy: <feature name> <allowlist of origin(s)> For example, to block all content from using the Geolocation API across your site: Feature-Policy: geolocation 'none' Several features can be controlled at the same time by sending the HTTP header with a semicolon-separated list of policy directives, or by sending a separate header for each policy. For example, the following are equivalent: Feature-Policy: unsized-media 'none'; geolocation 'self' https://example.com; camera *; Feature-Policy: unsized-media 'none' Feature-Policy: geolocation 'self' https://example.com Feature-Policy: camera *; The second way to use Feature Policy is for controlling content within an iframe. Use the allow attribute to specify a policy list for embedded content. For example, allow all browsing contexts within this iframe to use fullscreen: <iframe src="https://example.com..." allow="fullscreen"></iframe> This is equivalent to: <iframe src="https://example.com..." allow="fullscreen 'src'"></iframe> This example allows <iframe> content on a particular origin to access the user's location: <iframe src="https://google-developers.appspot.com/demos/..." allow="geolocation https://google-developers.appspot.com"></iframe> Similar to the HTTP header, several features can be controlled at the same time by specifying a semicolon-separated list of policy directives. For example, this blocks the <iframe> from using the camera and microphone: <iframe allow="camera 'none'; microphone 'none'"> Scripts inherit the policy of their browsing context, regardless of their origin. That means that top-level scripts inherit the policy from the main document. All iframes inherit the policy of their parent page. If the iframe has an allow attribute, the policies of the parent page and the allow attribute are combined, using the most restrictive subset. For an iframe to have a feature enabled, the origin must be in the allowlist for both the parent page and the allow attribute. Disabling a feature in a policy is a one-way toggle.
If a feature has been disabled for a child frame by its parent frame, the child cannot re-enable it, and neither can any of the child's descendants. It's difficult to build a website that uses all the latest best practices and provides great performance and user experiences. As the website evolves, it can become even harder to maintain the user experience over time. You can use feature policies to specify the desired best practices, and rely on the browser to enforce the policies to prevent regressions. There are several policy-controlled features designed to represent functionality that can negatively impact the user experience. These features include: - Layout-inducing Animations - Unoptimized (poorly compressed) images - Oversized images - Synchronous scripts - Synchronous XMLHttpRequest - Unsized media To avoid breaking existing web content, the default for such policy-controlled features is to allow the functionality to be used by all origins. That is, the default allowlist is '*' for each feature. Preventing the use of the sub-optimal functionality requires explicitly specifying a policy that disables the features. For new content, you can start developing with a policy that disables all the features. This approach ensures that none of the functionality is introduced. When applying a policy to existing content, testing is likely required to verify it continues to work as expected. This is especially important for embedded or third-party content that you do not control. To turn on the enforcement of all the best practices, specify the policy as below. Send the following the HTTP header: Feature-Policy: layout-animations 'none'; unoptimized-images 'none'; oversized-images 'none'; sync-script 'none'; sync-xhr 'none'; unsized-media 'none'; <iframe src="https://example.com..." allow="layout-animations 'none'; unoptimized-images 'none'; oversized-images 'none'; sync-script 'none'; sync-xhr 'none'; unsized-media 'none';"></iframe>
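The same best-practice header can be attached on the server side so every response carries it. The sketch below shows one way to do this in Python with Flask; Flask is just an assumed example framework here, not something the text above prescribes.

```python
from flask import Flask

app = Flask(__name__)

# The best-practice policy from the text above, disabling the sub-optimal features.
BEST_PRACTICE_POLICY = (
    "layout-animations 'none'; unoptimized-images 'none'; oversized-images 'none'; "
    "sync-script 'none'; sync-xhr 'none'; unsized-media 'none'"
)

@app.after_request
def add_feature_policy(response):
    # Browsers that understand Feature-Policy will enforce it; others ignore the header.
    response.headers["Feature-Policy"] = BEST_PRACTICE_POLICY
    return response

@app.route("/")
def index():
    return "<h1>Policy-protected page</h1>"

if __name__ == "__main__":
    app.run()
```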
We have URL1 and URL2 and the program will generate URL3. Consider the following case study: URL1 is being tracked to monitor its traffic sources. URL2 is a blog or any other website. Can you write a program that will use URL1 to generate URL3 in a way that URL1 and URL3 will go to the same destination page when clicked. When traffic comes through URL1, TRUE traffic source is revealed when tracked / queried to know where the link was clicked. When traffic comes through URL3, it goes to same destination page as URL1 but URL2 (another blog or website) is seen as the (referrer) traffic source. 1. The program will be installed on a website. 2. It will have admin interface. 3. Admin can login and add as many links as possible to fake the referrer of each link. 4. In the admin, there will be a field to put URL1 and URL2 input fields infront of each other.
"The NICCS [National Initiative for Cybersecurity Careers and Studies] Portal's cybersecurity lexicon is intended to serve the cybersecurity communities of practice and interest for both the public and private sectors. It complements other lexicons such as the NISTIR [National Institute of Standards and Technology Internal Reports] 7298 Glossary of Key Information Security Terms. Objectives for lexicon are to enable clearer communication and common understanding of cybersecurity terms, through use of plain English and annotations on the definitions. The lexicon will evolve through ongoing feedback from end users and stakeholders." National Initiative for Cybersecurity Careers and Studies: https://niccs.us-cert.gov
- The tool is included in the recent versions of Excel and is available as a separate add-in for older Excel versions. - The Power Query attack technique is similar to another exploit that abuses an Excel feature named Dynamic Data Exchange (DDE). Security experts have come up with a method to abuse Microsoft Excel’s Power Query feature. The technique can allow an attacker to run malicious code on users’ systems. The tool is included in the recent versions of Excel and is available as a separate add-in for older Excel versions. What is the purpose of Power Query? Power Query is a data connection technology that can be used to search for data sources, make connections and then shape data (such as remove a column, change a data type or merge tables) as per the requirements. How can Power Query be abused? In their research, a security expert from Mimecast Threat Center has described that the technique used to abuse Power Query relies on creating malformed Excel documents. These malformed documents can then use Power Query to import data from an attacker’s remote server. "Using Power Query, attackers could embed malicious content in a separate data source, and then load the content into the spreadsheet when it is opened. The malicious code could be used to drop and execute malware that can compromise the user's machine,” wrote Mimecast researcher Ofir Shlomo in a blog post. The technique can even bypass security sandboxes that analyze documents sent via email. Striking similarity with DDE exploit The Power Query attack technique is similar to the one that was used to abuse another Excel feature named Dynamic Data Exchange (DDE). The technique was documented in 2017 by SensePost and could be used to distribute malware. What action has been taken? Mimecast has contacted Microsoft to inform them about the issue. The IT giant has declined to patch the issue as it is not actually a vulnerability but just a method which bad actors can abuse a feature to do bad things.
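From a defensive angle, suspicious workbooks can be triaged before they are opened. Because .xlsx files are ZIP archives and workbook data connections are described in xl/connections.xml, a short script can flag documents whose connections reference external URLs. The Python sketch below is a coarse triage aid under those assumptions, not a reproduction of the Mimecast technique.

```python
import re
import sys
import zipfile

def external_connections(xlsx_path: str) -> list[str]:
    """Return any http(s) URLs mentioned in the workbook's data connection definitions."""
    with zipfile.ZipFile(xlsx_path) as zf:
        if "xl/connections.xml" not in zf.namelist():
            return []
        xml = zf.read("xl/connections.xml").decode("utf-8", errors="replace")
    # Coarse check: any URL appearing in the connection XML is worth a closer look.
    return re.findall(r"https?://[^\"'<> ]+", xml)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        urls = external_connections(path)
        if urls:
            print(f"{path}: external data connections -> {urls}")
```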
A new version of the ERMAC Android banking trojan has been released which allows the malware to target a wider range of applications to steal account credentials and cryptocurrency from. In addition to new features, ERMAC 2.0 has also seen its price increase from $2,000 to $5,000 per month on dark web forums where cybercriminals purchase access to the malware to use in their cyberattacks. Once deployed, the goal of this trojan is to steal login credentials from unsuspecting users which are then used to take over their banking and cryptocurrency accounts to commit fraud according to BleepingComputer. Distributed through fake apps Security researchers at the cybersecurity firm ESET discovered that a fake Bolt Food application is currently being used to distribute ERMAC 2.0 in Poland. The malicious app impersonates the legitimate food delivery service but, fortunately, the fake site used by the cybercriminals behind this latest malware campaign has been taken down. Before it was taken down, links to the site were likely sent to potential victims through phishing emails, social media posts or by SMS. If a user did manage to download the fake app via the site, a permission request popped up when the app first opened asking them to give it full control of their device. With access to Android’s Accessibility Services, the fake app is able to serve application overlays that are used to steal login details from users who think they are inputting their credentials in Bolt Food’s legitimate app. ERMAC 2.0 supports an extensive list of apps While version 1.0 of ERMAC was capable of targeting 378 different applications including the apps of many popular banks, version 2.0 has bumped up the number of supported apps to 467. Going forward, we’ll likely see other campaigns impersonating popular apps in order to distribute ERMAC 2.0. According to a blog post from the threat intelligence company Cyble, ERMAC’s creators already have a number of overlays set up to steal user credentials from IDBI Bank, Santander, GreaterBank and Bitbank. One of the reasons that ERMAC 2.0 is so dangerous is due to the number of permissions it grants itself upon installation. With access to 43 different permissions, the malware is able to access your SMS messages, contacts, microphone and device storage. How to protect yourself from Android malware and banking trojans The simplest and easiest way to protect yourself and your devices from malware and banking trojans is not to install apps from unknown sources and use the Google Play Store, Amazon Appstore or the Samsung Galaxy Store. Although installing an app using an APK file can be fast and convenient, these installation files aren’t checked for malware and other threats which could lead to you falling victim to fraud or even worse, identity theft. At the same time, you should always be wary when granting permissions in Android. Not every app needs to access your camera, microphone or storage to function properly and cybercriminals often exploit Android’s Accessibility Services to give their fake apps more features.
PhishingBox Helps Solve the False Positive Dilemma Announcing the Release of Advanced Human Detection (AHD) Functionality Lexington, Kentucky, October 11, 2021: PhishingBox is excited to announce the addition of Advanced Human Detection (AHD) to the platform. This update will help security professionals save time by minimizing false positives caused by security and other anti-phishing software. AHD uses many data points to determine if a human performed the actions. Without AHD, the results of a simulated phishing test may be unreliable. Advanced Human Detection (AHD) will apply to all future phishing campaigns within the PhishingBox system. With AHD, two new action categories are included: verified and suspicious. Verified actions are confirmed as being performed by a human, using many data points to analyze. The suspicious status, which is the default for all incoming actions, means that the action cannot be verified through AHD or known IP addresses. In many cases, security software traverses message links to determine if a message is malicious. These actions, or false positives, overestimate the failure rate of a phishing simulation test. Before AHD, all actions that were not filtered through an IP or User-Agent were considered legitimate and counted as failures. Going forward, only actions that can be verified through AHD or verified IPs will be identified as a failure action for the test target. The official release date of AHD is Tuesday, October 12th, 2021. PhishingBox provides a comprehensive Security Awareness Ecosystem through a suite of tools and services to implement and maintain a high-level security awareness training program. Key components of this security awareness ecosystem include an Industry-leading phishing simulation tool, security awareness training, a learning management system (LMS), KillPhish reporting button, and our 'Phishing Inbox,' which allows InfoSec teams to thoroughly investigate reported emails and related information.
- THE EFFECT OF INTERNAL AUDIT ON FRAUD DETECTION AND PREVENTION (A CASE STUDY OF POWER HOLDING COMPANY OF NIGERIA) - THE IMPACT OF INTERNAL AUDIT ON FRAUD DETECTION AND PREVENTION (A CASE STUDY OF POWER HOLDING COMPANY OF NIGERIA) - THE EFFECT OF INTERNAL CONTROL AS A BASIC TOOL FOR FRAUD DETECTION AND PREVENTION - DETECTION AND PREVENTION OF FRAUD IN GOVERNMENT CORPORATIONS (A CASE STUDY OF PHCN) - THE EFFECT OF INTERNAL CONTROL AS A BASIC TOOL FOR FRAUD DETECTION AND PREVENTION (A CASE STUDY OF SKYE BANK PLC) - AN EVALUATION OF COMPUTER UTILIZATION IN RECORD MANAGEMENT IN THE NIGERIAN UNIVERSITY SYSTEM (A CASE STUDY OF THE UNIVERSITY OF ADO-EKITI IN EKITI STATE) - DESIGN AND IMPLEMENTATION OF A COMPUTER-BASED SEAPORT BILLING SYSTEM (A CASE STUDY OF THE NIGERIAN PORTS AUTHORITY, LAGOS) - AN INVESTIGATION INTO THE PROBLEMS FACING TEACHERS IN TEACHING COMPUTER SCIENCE IN NIGERIAN SECONDARY SCHOOLS - FRAUD DETECTION AND PREVENTION - THE ROLE OF THE ACCOUNTING PROFESSION - THE ROLE OF AUDITORS IN THE DETECTION AND PREVENTION OF FRAUD IN SOME SELECTED BUSINESS ORGANIZATIONS THE ROLE OF THE COMPUTER IN FRAUD DETECTION AND PREVENTION (A CASE STUDY OF FIRST BANK OF NIGERIA PLC) Fraud has been identified as the major cause of distress and of the unprogressive state of most business organisations in the country. Even though there are intelligent people who work round the clock to defraud the system, there are more intelligent people who build fraud-checking devices, hence the computer. The aim of this study is to find out the role of the computer in fraud detection and prevention in business organisations (a case study of First Bank of Nigeria Plc). The work has five chapters. Chapter one contains the body of the topic, which includes the problems studied, why the study was carried out and the definition of terms. An extensive review of related literature on the computer as it relates to computer crime and its types, computer fraud and its types, and the role of the computer in fraud detection and prevention is presented in chapter two. Chapter three deals with the design of the study, the methods used in the distribution and collection of the questionnaires and the treatment of the data. The data obtained from the research survey are analysed and interpreted, and the hypotheses tested, in chapter four. Finally, the summary of findings, the conclusion of the research and the recommendations are all in chapter five. If banks and other business organisations in the country put the findings and recommendations to use, the incessant incidence of fraud will not only be detected and prevented, but customers will be satisfied with the services, bringing a continuous increase in profit and thereby making the problem of the banks a thing of the past.
TABLE OF CONTENTS
Title Page
Certification
Acknowledgement
Table of Contents
Abstract
1.1 Background of the study
1.2 Statement of problems
1.3 Research questions
1.4 Objectives of the study
1.5 Significance of the study
1.6 Scope of the study
1.7 Research hypotheses
1.8 Definition of concepts
2.0 Literature review
2.1 Computer definition
2.2 Classification of computers
2.3 Computer crime
2.4 Types of computer crime
2.5 Computer fraud
2.6 Fraud and the banking industry
2.7 Nature of fraud
2.8 Types of fraud
2.9 Role of the computer in fraud detection
2.10 Damage computer fraud can cause
2.11 Role of the computer in fraud prevention
3.0 Research methodology
3.1 Research design
3.2 Area of the study
3.4 Sample size and sampling techniques
3.5 Method of data collection
3.6 Method of collection of data
3.7 Validation of instrument
4.0 Presentation and analysis of data
4.1 Tables and charts
4.2 Interpretation of tables and charts
4.3 Test of hypotheses
4.3.1 Summary of findings
5.0 Findings, recommendation, conclusion
5.1 Discussion of findings
The computer machine and computer technologies are both human oriented. Although the computer is brilliant and intelligent, it cannot compete with humans. This is to say that the excellence and intelligence of computer technology derive from the human elements that interact with the system and technology to make it work. It is whatever these human elements command the computer to do that it does. The computer is absolutely harmless unless it is imbued with the tendency to harm. A computer is basically defined as an electronic machine which accepts data from an input device, performs arithmetic and logic operations on it in accordance with defined instructions, and finally transfers the processed data to an output device through the central processing unit (CPU), either for further processing or to be produced as a final result. Computer technology has come to be accepted as an indispensable innovation in fraud detection and prevention in the banking industry. Fraud has for a long time now remained the cancer-worm that is biting hard on the backbone of the banking industry. This phenomenon has led to the liquidation of many banks in Nigeria. It has been the major cause of the ugly development in our banking industry now referred to as 'DISTRESS'. This development, which occurs frequently among our banks, has placed a big question mark on the credibility of Nigerian banks both within and outside the country. The need to combat this double-headed monster by all means gave rise to the computerisation of the banking industry, especially in the areas susceptible to fraud. 1.1 BACKGROUND OF THE STUDY Since 1986, when the structural adjustment programme was introduced, the banking business has changed. As the number of branches grew from sixty-five in 1985 to one hundred and fifty-five in 1994, the techniques of delivering bank services changed. In November 1990, Societe Generale Bank launched their first automated teller machine (ATM), with the trade name SGBN's Cash Point 24. This was followed by First Bank of Nigeria Plc in December 1991, with their ATM called "First Cash" located in six branches in Lagos. The house journal of First Bank of Nigeria (1991) affirmed that prior to the ATM, other forms of electronic banking existed, notable among which were computers.
These included the computerisation of the Enugu (Main) branch in April 1989, the recent introduction of a local area network (LAN) on the 26th of October 1999, and the re-engineering project which will usher in a wide area network (WAN) before the next millennium. But how far has the computer networking of these banks reduced the long stay of customers at the counter? Could the manual application of interest on savings accounts, commission on turnover and overdraft accounts be handled automatically by the system? Can management always extract up-to-date information on the account position of the operation at a particular point in time? The state of the banks' services to customers is such that customers are made to wait helplessly at the counter for hours, only to be attended to by neatly dressed but impolite bank cashiers. The security checks are simply cumbersome, and bank staff wishing to stay in their employment vigorously follow these procedures. The bank managers and their staff seem not to bother, as they profit from the chaos the queues generate. It creates in them an air of self-importance, and a customer who is time conscious will have to see the manager or a company officer for a behind-the-counter deal. However, there is no doubt that the fear of fraud has slowed down bank services and has militated against performing any feat in the art of quick and excellent quality service. The experience of the banks is that any money that goes out by mistake does not come back, and there are many people who are prepared to rob the bank through fraud. But the computer has helped a lot in fraud detection and prevention in the banking industry. Based on personal experience and the views of others on the role of the computer in fraud detection and prevention in the banking industry in Nigeria, the researcher was motivated to evaluate the role of the computer in fraud detection and prevention in banking services, with a case study of First Bank of Nigeria Plc, Enugu main branch. 1.2 STATEMENT OF PROBLEMS It has been observed that many people have been manipulating money and embezzling funds in business organisations such as First Bank of Nigeria Plc, Enugu branch. Observation also shows that some people prefer manipulated money to their main salary, which can lead to the liquidation of the organisation. Handling such problems was difficult until the invention of the computer. What then is the role of the computer in fraud detection and prevention in business organisations? This fundamental question underlines the problems of this study. This research work seeks to identify the major problems confronting the banking system, and it is directed towards the following statement of problems. 1. One major problem in the banking industry is the rate of financial fraud, which is evidenced by poor accounting records and poor remuneration. 2. Another problem is to determine the consequences of financial embezzlement in the banking industry, which can lead to a poor profit base, retrenchment and the bank becoming distressed. 3. The inability to ascertain who the fraudsters actually are, which may include the officials of the banks, customers and the society at large. 1.3 RESEARCH QUESTIONS To guide the study, the following research questions were raised. 1. What is the role of the computer in fraud detection? 2. How can the computer be used to prevent fraud? 3. What are the causes of fraud in business organisations? 4. What are the consequences of fraud in business organisations? 1.4 OBJECTIVES OF THE STUDY The objectives of this study are as follows: 1.
To find out the role of the computer in fraud detection. 2. To determine how the computer can be used to prevent fraud. 3. To identify the causes of fraud. 4. To suggest measures for preventing fraud. 5. To ascertain the consequences of fraud in business organisations. 6. To make recommendations based on the findings. 1.5 SIGNIFICANCE OF THE STUDY The significance of this study lies in the great importance attached to the computer in business organisations. From the findings of this research, people will know the importance of the computer in fraud detection and, by extension, prevention. This work is significant in the sense that it will be an eye-opener to the public at large. Another significance is that people will know that there is no means of manipulating funds where a computer is installed. Interested organisations, business centres, institutions and many others that, having learned the role of the computer, make use of it in their organisations to curb this fast-growing problem will benefit from the study. It will help an organisation to know when its money is embezzled. 1.6 SCOPE OF THE STUDY This study covers a period of five years, that is, from 2003 backwards. This period was chosen because the problem of fraud in banking appears to be more prevalent in this period. First Bank Plc, Enugu, of Enugu State has been chosen as the study area. 1.7 RESEARCH HYPOTHESES NULL HYPOTHESES (HO) HO1: There is no significant difference in the views of senior and junior staff of First Bank of Nigeria, Okpara Avenue, on the extent of improved customer services because of computer installation. HO2: There is no significant difference between the staff on the extent to which the management information system is adequate without a computer. HO3: There is no significant difference between the staff on the extent to which computer installation has helped in the achievement of the objectives. HO4: There is no significant difference in the views of senior and junior staff of First Bank of Nigeria Plc, Okpara Avenue, on the adequacy of human and non-human resources for the implementation of computer installation. 1.8 DEFINITION OF CONCEPTS These are the definitions of terms used in the study. COMPUTER: A computer is an electronic device with built-in programs which enable it to receive, store, process and produce large amounts of data for the execution of a wide range of arithmetic and logical operations. CRIME: Crime is any act of commission or omission believed to be socially harmful to a group and thus forbidden by the designated authority of that group under threat of punishment; stated another way, it is a specific element or act of human behaviour, varying in time and place, which is considered repugnant or harmful enough to be forbidden by the group and is thus subject to a penalty. FRAUD: Criminal deception, or the manipulation of funds by a criminal act. PREVENTION: Serving or designed to prevent; precautionary. THIEF: A person who steals, especially secretly and without violence.
Most modern EDR solutions use behavioral detection, allowing them to detect malware based on how it behaves instead of relying solely on static indicators of compromise (IoCs) like file hashes or domain names. In this post, I give a VBA implementation of two techniques that allow spoofing both the parent process and the command line arguments of a newly created process. This implementation allows crafting stealthier Office macros, making a process spawned by a macro look like it has been created by another program such as explorer.exe and has benign-looking command line arguments. I am not the author of these techniques; credits are given in the full post. Continue reading: Building an Office macro to spoof parent processes and command line arguments. Like every year, the Swiss security event Insomni'hack releases a "CTF teaser" two months prior to the real CTF. This post is a write-up for three of the challenges: Vulnshop, Smart-Y, and Hax4Bitcoins. Unfortunately I learned about this CTF a bit late, so I didn't get much time to play on it. Cloudflare is a service that acts as a middleman between a website and its end users, protecting it from various attacks. Unfortunately, those websites are often poorly configured, allowing an attacker to entirely bypass Cloudflare and run DDoS attacks or exploit web-based vulnerabilities that would otherwise be blocked. This post demonstrates the weakness and introduces CloudFlair, an automated detection tool. I recently worked on a small toy project to execute untrusted Python code in Docker containers. This led me to test several online code execution engines to see how they reacted to various attacks. While doing so, I found several interesting vulnerabilities in the code execution engine developed by Qualified, which is quite widely used, including by websites like CodeWars or InterviewCake. The combination of being able to run code with network access and the fact that the infrastructure was running in Amazon Web Services led to an interesting set of vulnerabilities, which we present in this post. This post is a walkthrough of the VulnHub machine SickOs 1.2. I previously wrote one for its little sister, SickOs 1.1. I found this second version to be more challenging, but also more realistic; the author tried to mimic what one could encounter during a real engagement, and it does so pretty well. In this post we will set up a virtual lab for malware analysis. We'll create an isolated virtual network separated from the host OS and from the Internet, in which we'll set up two victim virtual machines (Ubuntu and Windows 7) as well as an analysis server to mimic common Internet services like HTTP or DNS. Then, we'll be able to log and analyze the network communications of any Linux or Windows malware, which will unknowingly connect to our server instead of the Internet. We demonstrate the setup with a real-life use case where we analyze the traffic of the infamous TeslaCrypt ransomware, a now defunct ransomware which infected a large number of systems. Continue reading: Set up your own malware analysis lab with VirtualBox, INetSim and Burp. In this post I'll talk about how I managed to exploit the SickOs 1.1 VM made by D4rk36. The fact that the author mentions it is very similar to the OSCP labs caught my eye since I'm seriously thinking about taking this certification in a few months. 🙂 Let's get started! I managed to find the time to play on a new vulnerable VM. This time, it will be Vulnix, and it will mainly revolve around exploiting vulnerable NFS shares.
The VM was overall quite simple, but it still taught me several things about NFS and how it plays with remote permissions. It's been a few months since I wrote my last write-up on a VulnHub vulnerable machine. Time for a new one! The VM is called Mr Robot and is themed after the TV show of the same name. It contains 3 flags to find, each of increasing difficulty. I recently started gaining a lot of interest in security, and after reading several CTF write-ups, I decided to try to solve one by myself. I chose Droopy v0.2. In case you don't know, the goal of a CTF is very simple: Capture The Flag! Most of the time, the flag is simply a text file that you can obtain after having gained root access on the machine. You are only provided with a virtual machine, and the rest is up to you. Let's get started!
This is the third post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here. In our BlackHat talk, “Fighting the Previous War “, we showed how attacks against cloud services and cloud-native companies are still in their nascent stages of evolution. The number of known attacks against AWS is small, which is at odds with the huge number (and complexity) of services available. It’s not a deep insight to argue that the number of classes of cloud specific attacks will rise. However, the “previous war” doesn’t just refer to cloud stuff. While our talk primarily dealt with cloud services, we also spent some time on another recent development, Google’s BeyondCorp. In the end, the results weren’t exciting enough to include fully in the talk and so we cut slides from the presentation, but the original slides are in the PDF linked above. In this post we’ll provide our view on what BeyondCorp-like infrastructure means for attackers, and how it’ll affect their approaches. What is BeyondCorp? We start with a quick overview of BeyondCorp that strips out less important details (Google has a bunch of excellent BeyondCorp resources if you’ve never encountered it before.) In an ossified corporate network, devices inside the perimeter are more trusted than devices outside the perimeter (e.g. they can access internal services which are not available to the public Internet). In addition, devices trying to access those service aren’t subject to checks on the device (such as whether the device is known, or is fully patched). In the aftermath of the 2009 Aurora attacks on Google, where attackers had access to internal systems once the boundary perimeter was breached, Google decided to implement a type of Zero Trust network architecture. The essence of the new architecture was that no trust was placed in the location of a client regardless of whether the client was located inside a Google campus or sitting at a Starbucks wifi. They called it BeyondCorp. Under BeyondCorp, all devices are registered with Google beforehand and all access to services is brokered through a single Access Proxy called ÜberProxy. This means that all Google’s corporate applications can be accessed from any Internet-connected network, provided the device is known to Google and the user has the correct credentials (including MFA, if enabled.) Let’s walk through a quick example. Juliette is a Google engineer sitting in a StarBucks leaching their Wifi, and wants to review a bug report on her laptop. From their documentation, it works something like this (we’re glossing over a bunch of details): - Juliette’s laptop has a client certificate previously issued to her machine. - She opens https://tickets.corp.google.com in her browser. - The DNS response is a CNAME pointing to uberproxy.l.google.com (this is the Access Proxy). The hostname identifies the application. - Her browser connects using HTTPS to uberproxy.l.google.com, and provides its client certificate. This identifies her device. - She’s prompted for credentials if needed (there’s an SSO subsystem to handle this). This identifies her user. - The proxy passes the application name, device identifier (taken from the client certificate), and credentials to the Access Control Engine (ACE). - The ACE performs an authorization check to see whether the user is allowed to access the requested application from that device. 
- The ACE has access to device inventory systems, and so can reason about device trust indicators such as: - a device’s patch level - its trusted boot status - when it was last scanned for security issues - whether the user has logged in from this device previously - If the ACE passes all checks, the access proxy allows the request to pass to the corporate application, otherwise the request fails. Google’s architecture diagrams include more components than we’ve mentioned above (and the architecture changed between their first and most recent papers on BeyondCorp). But the essence is a proxy that can reason about device status and user trust. Note that it’s determining whether a user may access a given application, not what they do within those applications. One particularly interesting aspect of BeyondCorp is how Google supports a bunch of protocols (including RDP and SSH) through the same proxy, but we won’t look at that today. (Another interesting aspect is that Google managed to migrate their network architecture without interruption and is, perhaps, the biggest takeaway from their series of papers. It’s an amazingly well planned migration.) This sucks! (For attackers) For ne’er-do-wells, this model changes how they go about their business. Firstly, tying authorisation decisions to devices has a big limiting effect on credential phishing. A set of credentials is useless to an external attacker if the authorisation decision includes an assertion that the device has previously been used by this user. Impersonation attacks like this become much more personal, as they require device access in addition to credentials. Secondly, even if a beachhead is established on an employee’s machine, there’s no flat network to laterally move across. All the attacker can see are the applications for which the victim account had been granted access. So application-level attacks become paramount in order to laterally move across accounts (and then services). Thirdly, access is fleeting. The BeyondCorp model actively incorporates updated threat information, so that, for example, particular browser versions can be banned en masse if 0days are known to be floating around. Fourthly, persistence on end user devices is much harder. Google use verified boot on some of their devices, and BeyondCorp can take this into account. On verified boot devices, persistence is unlikely to take the form of BIOS or OS-level functionality (these are costly attacks with step changes across the fleet after discovery, making them poor candidates). Instead, higher level client-side attacks seem more likely. Fifthly, in addition to application attacks, bugs in the Access Control Engine or mistakes in the policies come into play, but these must be attacked blind as there is no local version to deploy or examine. Lastly, targeting becomes really important. It’s not enough to spam random @target.com addresses with dancingpigs.exe, and focus once inside the network. There is no “inside the network”; at best you access someone’s laptop, and can hit the same BeyondCorp apps as your victim. A quick look at targeting The lack of a perimeter is the defining characteristic of BeyondCorp, but that means anyone outside Google has a similar view to anyone inside Google, at least for the initial bits needed to bootstrap a connection. We know all services are accessed through the ÜberProxy. In addition, every application gets a unique CNAME (in a few domains we’ve seen, like corp.google.com and googleplex.com).
DNS enumeration is a well-mapped and frequently-trod path, and effective at discovering corporate BeyondCorp applications. Pick a DNS enumeration tool (like subbrute), run it across the corp.google.com subdomain, and get 765 hostnames. Each maps to a Google corporate application (a minimal sketch of this kind of enumeration appears at the end of this post). But DNS isn’t the only place to identify BeyondCorp sites. As is the fashion these days, Google is quite particular about publishing new TLS certificates in the Certificate Transparency logs. These include a bunch of hostnames in corp.google.com and googleplex.com. From these, more BeyondCorp applications were discovered. Lastly, we scraped the websites of all the hostnames found to that point and found additional hostnames referenced in some of the pages and redirects. For fun, we piped the list into PhantomJS and screencapped all the sites for quick review. Results? We don’t need no stinking results! The end result of this little project was a few thousand screencaps of login screens. The captions on those screencaps tell the story: “Quite a few of these”, “Error showing my device isn’t allowed access to this service”, “Occasional straight 403”, and “So, so many of these”. Results were not exciting. The only site that was open to the Internet was a Cafe booking site on one of Google’s campuses. However, a few weeks ago a high school student posted the story of his bug bounty which appeared to involve an ÜberProxy misconfiguration. The BeyondCorp model explicitly centralises security and funnels traffic through proxy chokepoints to ease authN and authZ decisions. Like any centralisation, it brings savings but there is also the risk of a single issue affecting all applications behind the proxy. The takeaway is that mistakes can (and will) happen. So where does this leave attackers? By no means is this the death of remote attacks, but it shifts focus from basic phishing attacks and will force attackers into more sophisticated plays. These will include more narrow targeting (of the BeyondCorp infrastructure in particular, or of specific end users with the required application access), and change how persistence on endpoints is achieved. Application persistence increases in importance, as endpoint access becomes more fleeting. With all this said, it’s unlikely an attacker will encounter a BeyondCorp environment in the near future, unless they’re targeting Google. There are a handful of commercial solutions which claim BeyondCorp-like functionality, but none rise to the same thoroughness of Google’s approach. For now, these BeyondCorp attack patterns remain untested.
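As promised above, here is a minimal sketch of the targeting steps described in this post: brute-forcing DNS names under a BeyondCorp-style domain and pulling extra hostnames from Certificate Transparency logs. The tiny wordlist, the target domain, and the use of the crt.sh JSON endpoint are assumptions made for illustration; this is not the tooling used in the original research (which used subbrute and PhantomJS).

```python
# Sketch only: enumerate hostnames via DNS brute force and Certificate Transparency.
import json
import socket
import urllib.request

DOMAIN = "corp.google.com"                                      # example target domain
WORDLIST = ["tickets", "bugs", "wiki", "mail", "vpn", "cafe"]   # tiny sample wordlist


def dns_enumerate(domain: str, words: list[str]) -> set[str]:
    """Resolve candidate hostnames; anything that resolves likely sits behind the proxy."""
    found = set()
    for word in words:
        host = f"{word}.{domain}"
        try:
            socket.gethostbyname(host)
            found.add(host)
        except socket.gaierror:
            pass  # no record, move on
    return found


def ct_log_hostnames(domain: str) -> set[str]:
    """Query the crt.sh Certificate Transparency search for names under the domain."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.loads(resp.read())
    names = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            if name.endswith(domain):
                names.add(name.lstrip("*."))
    return names


if __name__ == "__main__":
    for host in sorted(dns_enumerate(DOMAIN, WORDLIST) | ct_log_hostnames(DOMAIN)):
        print(host)
```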
It is possible to develop software source code, called underhanded code, that appears benign to human review but is actually malicious. This is not merely an academic concern; in 2003, an attacker attempted to subvert the widely used Linux kernel by inserting underhanded software. This paper provides a very brief initial look at underhanded source code, with the intent to eventually help develop countermeasures against it. This paper identifies and summarizes public examples of underhanded code, briefly summarizes the literature, and identifies promising countermeasures. It then examines one data set (the Obfuscated V Contest), tries a small set of countermeasures, and measures their effectiveness. This initial work suggests that a small set of countermeasures can significantly reduce the risks from underhanded code. The paper concludes with recommendations on how to expand on this work.
Why can my device not be accessed through the IP address? Sometimes you might try to log into your device through its IP address on a computer, but you get an error (e.g. “Cannot be reached”, “Cannot be found”). This can mean a few things:
- the device is not connected to the network
- the computer is not connected to the network
- the device and the computer are not connected to the same network
- you may need to add http:// to the beginning of the URL (example: http://192.168.0.100)

If you are getting that error, check that all physical network connections (network cables) are secure. Also check that the device and the computer are on the same network, because if they are not, the two will not be able to communicate with each other.
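A quick way to test the same things the checklist above covers is to see whether the computer can open a TCP connection to the device at all. The sketch below is illustrative only; the address and port are assumptions, so substitute your device's IP and the port its web interface actually listens on.

```python
# Minimal reachability check for a device's web interface (assumed IP and port).
import socket

DEVICE_IP = "192.168.0.100"   # example address from the article
PORT = 80                     # typical HTTP port for a device web UI

try:
    with socket.create_connection((DEVICE_IP, PORT), timeout=3):
        print(f"Reached {DEVICE_IP}:{PORT} -- try http://{DEVICE_IP} in a browser.")
except socket.timeout:
    print("Connection timed out -- check cables and that both ends are on the same network.")
except OSError as err:
    print(f"Could not connect: {err} -- the device may be off or on a different subnet.")
```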
Improving Security of Lightweight Authentication Technique for Heterogeneous Wireless Sensor Networks

One of the great applications of user authentication and key agreement protocols is to access sensor information securely over insecure networks. Recently, Kalra and Sood proposed an efficient smart card based authentication protocol to exchange confidential information securely between the user and the sensor node. We review the security of the Kalra and Sood scheme and observe that it is vulnerable to stolen smart card attack, stolen verifier attack and impersonation attack. The entire analysis indicates that there is a need for a secure and user-friendly authentication mechanism for wireless sensor networks (WSNs). To ensure secure communication in WSNs, we improve the security of the authentication mechanism for WSNs. The main intention of this paper is to eliminate the mentioned security attacks by proposing an efficient authentication protocol using smart cards. To validate the security attributes, we have used the well-known AVISPA simulation tool, whose results show that the proposed protocol is SAFE under the OFMC and CL-AtSe models. Further, the performance analysis shows its efficiency.

Keywords: Wireless sensor networks, Authentication, Security
Control of applications: Fortinet FortiGate Application Control

Your FortiGate can detect and intercept network traffic based on the application generating the traffic, using the Application Control security profile feature. Application control uses FortiGate Intrusion Protection protocol decoders to log and manage the behaviour of application traffic passing through the FortiGate. Even if traffic uses non-standard ports or protocols, application control uses IPS protocol decoders to detect application traffic. Numerous applications can be recognized by the FortiGate unit. Application control sensors can be added to the firewall policies that control the traffic of the applications you need to monitor and the network on which they run. By continually adding applications to the FortiGuard Application Control Database, Fortinet is constantly expanding the list of applications it can detect with application control. The FortiGuard Intrusion Protection System database and the application control database share the same version number, since intrusion protection protocol decoders are used for application control. By going to the License Information dashboard widget and finding the IPS Definitions version, you can find out which version of the application control database is installed on your unit. To see all the applications FortiGuard supports, go to the FortiGuard Application Control List. All supported applications are listed on this web page. The details of any application can be viewed by selecting its name.

Concepts of application control

Network traffic can be controlled by its source or destination address, port, quantity, or similar attributes in the security policy. These methods may not be sufficient to precisely identify traffic coming from a specific application. The application control feature addresses this issue by examining the traffic itself for unique signatures. No server addresses or ports are required for application control. Over 1000 applications, services, and protocols are supported by FortiGate.

Basic applications are automatically allowed

The alternative to listing each application individually is to block applications by category. Listing applications individually gives a great deal of granularity, but it makes it easy to miss some of them. Blocking traffic by category, however, has the disadvantage of blocking some traffic that was not intended to be blocked. Default permissions may be appropriate for a number of basic applications, DNS for instance. Blocking the Network Services category would block your web browsing, unless your users are part of a very small group that uses IP addresses instead of URLs to browse the web: in the absence of DNS, URLs cannot be resolved into IP addresses. Using the FortiGate’s CLI, the following traffic types can be automatically allowed, regardless of whether their category is blocked:
- Domain Name System (DNS)
- Internet Control Message Protocol (ICMP)
- Generic web browsing via HTTP
- Generic communication over SSL

Applications for instant messaging

Some IM applications do not have the Application Control function in the web-based manager. Instead, they are handled by the FortiGate CLI. Application access is controlled by allowing or denying users access. IM accounts can be configured to enable or disable unknown users. The application determines whether the user should be added to a blacklist or whitelist based on a global policy.
In the CLI Reference guide, under the heading of imp2p, you can find details about how to configure these settings.
A virtual data room is basically an online database of information used primarily for the storage and distribution of documents. Most of the time, a virtual data room is used to facilitate the due diligence phase of an M&A deal, a private equity or venture capital transaction, or a loan syndication. It can also be used as a secondary data center for your business, housing servers, your intranet, or any other applications that are critical to your business. In this modern setting, there is no question that these rooms are incredibly useful tools. However, they should not be used for anything other than their intended purposes; used incorrectly, these data rooms can create real problems for your business. Many of the applications that run in these virtual data rooms are encrypted, which is why it is vital that the network has an encryption scheme in place. This does not have to be a full-blown encrypted network; a simple method will be sufficient. Any incoming or outgoing traffic should first be encrypted and routed through several layers of protection before it is stored on your local machine or on a remote server. In addition, it is vitally important that there are redundancy measures in place in case something happens to a router or a server, so that your company’s network is not crippled by the loss of one of these vital tools. As you can see, there are a number of reasons that virtual data rooms offer such useful features to businesses. However, just like any technological solution, these rooms can also have their disadvantages. As mentioned above, the protocol layer, which acts as an encryption layer, is susceptible to attack from outside sources, and the data centers themselves can be attacked by hackers or other malicious actors. However, these issues are unlikely to cause significant damage to your business. As long as the network itself is secure and kept clean, and the physical machines used are kept under lock and key, there is little to worry about.
Signing code provides a method both to verify that your code has not been altered or corrupted and to authenticate that the code comes from a known source. In the past it was common to use a CRC or simple checksum, but those methods at most check for code corruption. A cryptographic hash is a mathematical function that takes a file of any size and returns a unique fixed-length string, commonly referred to as the hash value or digest. The hash function replaces the checksum or CRC to verify the code. The 256-bit SHA-2 (Secure Hash Algorithm 2), or SHA-256, is used to generate the digest from the application bundle. The application bundle includes a header, a CM0+ project binary, and optionally a CM4 project binary, depending on the application. The digital signature is then added to the end of the application bundle. ModusToolbox takes care of calculating the digest, creating the digital signature and appending it to the application bundle. It is important to note that only the digest is encrypted, not the code binary itself. The RSA key used to encrypt the digest is what is known as an asymmetric key. This means that the key has two parts, a public key and a private key. The private key is used to encrypt the digest and should be stored as securely as possible. If anyone were to obtain the private key, they could sign code for your product as well, and the device would not know the difference. The other half of the asymmetric key is known as the public key. This public key can be exposed to the public without harm. The only requirement is that the public key must be either authenticated or protected so that it cannot be changed. When the PSoC™ 6 was moved to the Secure lifecycle stage, a hash was created that included the OEM public key and placed into immutable eFuse. When the PSoC™ 6 boots in the Secure lifecycle stage, it calculates the digest of the application bundle, not including the digital signature, just as the build process did, using the SHA-256 function. The digital signature is then decrypted using the OEM public key. The calculated digest and decrypted digest are then compared to authenticate the OEM application code. If the digests match, the application code is executed. If the digests don’t match, the PSoC™ 6 enters the Dead protection state. See the diagram below for the code authentication flow. This flow is only valid for the first OEM code that is executed. In many applications, the first code is a bootloader supplied by the OEM. It is recommended to have this bootloader authenticate the code it boots with a system similar to the one just discussed. Infineon supplies a common open source bootloader called MCUBoot. It supports code authentication, but uses ECC asymmetric keys to authenticate. This process is almost identical to the way the PSoC™ 6 boot process works. Since the ECC public key is part of the MCUBoot bootloader, the key has already been validated, and therefore the chain of trust is complete from the PSoC™ 6 ROM code to the OEM application code.
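To make the sign-then-verify flow above concrete, here is a minimal sketch in Python of the same pattern: hash the bundle, sign the digest with the private key at build time, and verify it with the public key at boot time. This is illustrative only, not the ModusToolbox or PSoC™ 6 ROM implementation; the file paths, key format and PKCS#1 v1.5 padding choice are assumptions made for the example.

```python
# Illustrative sketch of the sign-then-verify pattern described above.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def sign_bundle(bundle_path: str, private_key_pem: str) -> bytes:
    """Build step: hash the application bundle with SHA-256 and sign the digest."""
    bundle = open(bundle_path, "rb").read()
    key = serialization.load_pem_private_key(
        open(private_key_pem, "rb").read(), password=None
    )
    # sign() hashes the bundle with SHA-256 internally, then encrypts that digest
    # with the private key; only the digest is encrypted, never the code itself.
    return key.sign(bundle, padding.PKCS1v15(), hashes.SHA256())


def authenticate_bundle(bundle_path: str, signature: bytes, public_key_pem: str) -> bool:
    """Boot step: recompute the digest and compare it with the decrypted signature."""
    bundle = open(bundle_path, "rb").read()
    pub = serialization.load_pem_public_key(open(public_key_pem, "rb").read())
    try:
        pub.verify(signature, bundle, padding.PKCS1v15(), hashes.SHA256())
        return True    # digests match: run the application
    except InvalidSignature:
        return False   # mismatch: analogous to entering the Dead protection state
```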
By Brian L. Goodman and Robert M. Lee, November 27, 2015, 8:15:50

The job of a security researcher is to find bugs in a piece of software or an application and then exploit those bugs. The goal of a malware researcher is not just to create new malware that could be used to break into a company, but to make a new piece of malware that would be used in future attacks. This new malware is the next step in the evolution of malware and its new purpose. Malware research and development is an increasingly lucrative job, with salaries exceeding $100,000 per year in the United States and other countries. That said, the amount of money a researcher makes depends on many factors. Some of those factors are technical skills, such as how well the researcher can write and analyze software, as well as the nature of the problem being solved. Other factors include the size of the research team, how well they know the underlying technology and how quickly they can adapt to changes in the industry. To be successful, researchers must be able to exploit vulnerabilities in the software, and the vulnerabilities need to be present for them to be exploited. That’s where cybersecurity professionals come in. In this article, we look at the differences between a cybersecurity professional and a malware hacker. We’ll focus on how cybersecurity professionals can be classified, as these are the types of professionals who find bugs and create malware. This will give us a better understanding of the role of cybersecurity professionals in the future. First, let’s look at what cybersecurity professionals are and how they work. The term cybersecurity professional is used loosely. It doesn’t refer to the same job description as cyber security specialists. Cybersecurity professionals typically focus on software security and malware security, but not all of them have the same skillset. The job description includes things like: using the tools that cyber security professionals have access to, they work to discover vulnerabilities in software and exploit them. They also create malware that can be used for other purposes. The key to cybersecurity professionals’ success is the combination of the skills they have, the technology they have access to and the knowledge they have about the software they are working on. Most cybersecurity professionals have a bachelor’s degree or higher. Cyber security professionals can get their bachelor’s degrees in computer science or engineering or any other field. Cyber professionals have also earned a master’s degree in computer or information security. They typically work as contractors, security consultants or as software engineers. There are a few categories of cybersecurity professional. Some specialize in a particular area of software security or malware security. These types of cybersecurity experts specialize in areas such as threat detection, penetration testing, vulnerability exploitation and network security. The cybersecurity profession also includes people who specialize in other fields such as cybersecurity administration or computer forensics. Some cybersecurity professionals specialize in penetration testing. Other cybersecurity professionals work as researchers. There are also some cybersecurity professionals who specialize primarily in software security. For example, a computer forensic analyst is a forensics specialist who works to find and identify software vulnerabilities in a system.
These experts use tools such as a vulnerability scanner and other techniques to find software vulnerabilities. They then use those vulnerabilities to discover new vulnerabilities in their systems and use them in future software attacks. A cybersecurity professional’s job can be divided into three different parts: the software development, the penetration testing and the malware research. The software development portion of a cybersecurity profession involves creating a security solution that is used by an organization to protect data and files from cyberattacks. This includes writing a patch or update to the software or modifying the software to make it more secure. This is a job that many cybersecurity professionals get. Some organizations may require that the security solution be open source and free to use. Cyber threats are real, and software solutions must be capable of protecting themselves against them. The threat is real and the need is real, but the technology that the company has developed to deal with the threat is a good bet to be successful. To make a patch to an existing software solution, the cyber security professional will typically write an application that runs on the company’s platform. This application will use the patch to patch the system and it will then run on the target system. The second part of a cyber security career is the penetration test. This involves testing and finding out if a company’s security systems are vulnerable. Cyber threat detection is the process of finding out whether a particular piece of information, such a user name or password, is being stored on a company system or other systems. It is used to detect and identify new threats. This part of the cybersecurity career involves testing systems to determine if they are vulnerable to cyber attacks and to help the company respond to those attacks. This also involves testing the system to determine whether a user account on the system is being used for malicious purposes. This is often done by monitoring the use of specific email addresses and other sensitive information by employees. This type of testing is called vulnerability testing. The last part of an
The Red Alert ransomware was made public on July 5, 2022 via Twitter by MalwareHunterTeam. According to the ransomware’s own website, as of this date, Red Alert had only a single company on its victim list. The group behind the attacks has used two designations in its operations: Red Alert in its attacks and ransom notes, but also ”N13V” internally. The malware targets VMware ESXi virtual servers, both Linux and Windows. Red Alert is designed to be used from the command line, allowing the threat actor to shut down any virtual machines that are active. The Red Alert ransomware is then able to encrypt the files corresponding to the virtual machines, such as .vmdk disks, SWAP files, logs and others. After the encryption process, the ransomware generates a .txt file named ”HOW_TO_RESTORE” with the details of the ransomware procedure. This document mentions the name ”Red Alert”, the ransom amount, as well as a link for payment in Monero cryptocurrency, which is the only currency accepted for the ransom. The group has been attacking companies while practicing double extortion. This means that before encrypting the data, the Red Alert ransomware is able to steal information from the virtual machine. This practice is widely used by hackers, allowing the authors of the threat to demand a ransom not only for the decryption key, but also to prevent the release of the stolen data. The Red Alert ransomware is a new malware with few executed attacks, so it is expected that in the coming days the group will continue to attack more and more companies around the world. Digital Recovery has developed a solution that can recover encrypted files without the need to contact the criminals to obtain the decryption key. This solution was developed in-house and makes the recovery of data encrypted by ransomware possible. For more than 23 years, Digital Recovery has been operating in the data recovery market across various storage devices, such as storages, databases, virtual machines, RAID systems, servers and others. Secrecy and security are part of our daily vocabulary. For this reason we base all our solutions on the General Data Protection Regulation (GDPR). We also provide our customers with a confidentiality agreement (NDA). Thanks to our technology, our services can be performed remotely, quickly and securely. Contact one of our specialists and have your encrypted data recovered immediately.
What is the Win32/Agent.UHC infection? In this short article you will learn about the definition of Win32/Agent.UHC and its negative impact on your computer. Such ransomware is a type of malware used by online scammers to demand a ransom payment from a victim. It is better to prevent than to repair and repent!

In the majority of cases, the Win32/Agent.UHC virus will instruct its victims to transfer funds in order to undo the changes the Trojan infection has introduced to the victim’s device. These changes can be as follows:
- Attempts to connect to a dead IP:Port (1 unique times);
- Reads data out of its own binary image;
- Unconventional language used in binary resources: Spanish (Modern);
- The binary likely contains encrypted or compressed data;
- The executable is compressed using UPX;
- Checks for the presence of known windows from debuggers and forensic tools;
- Installs itself for autorun at Windows startup;
- Creates a copy of itself;
- Encrypts the files located on the victim’s hard disk, so the victim can no longer use the data;
- Prevents routine access to the victim’s workstation.

The most common channels through which Win32/Agent.UHC ransomware is injected are:
- Phishing emails;
- A user ending up on a resource that hosts malicious software.

As soon as the Trojan is successfully injected, it will either encrypt the data on the victim’s computer or prevent the device from working properly, while also placing a ransom note stating that the victim must make a payment in order to decrypt the documents or restore the file system to its original condition. In most instances, the ransom note will appear when the user reboots the PC after the system has already been damaged.

Win32/Agent.UHC distribution channels. In many corners of the globe, Win32/Agent.UHC grows by leaps and bounds. However, the ransom notes and the tricks used to extort the ransom amount may differ depending on local (regional) settings.

False alerts about unlicensed software. In certain areas, the Trojans often falsely report having detected unlicensed applications on the victim’s device. The alert then demands that the user pay the ransom.

False claims about illegal content. In countries where software piracy is less common, this approach is not as effective for the cyber criminals. Instead, the Win32/Agent.UHC popup alert may falsely claim to come from a law enforcement institution and report having found child pornography or other illegal content on the device. The alert will likewise contain a demand for the user to pay the ransom.
File Info:
- crc32: B03881EB
- md5: 3888d0549631b869513ab7ce8da2a405
- name: 3888D0549631B869513AB7CE8DA2A405.mlw
- sha1: fbb69ca70d03843d0c82f76e4746015f26291731
- sha256: 19162060299108324b306eb8ed8249ec8d550823b17d7404057ec34a79a89d98
- sha512: e373670dfb88bfba5353475c9ce79b88ba972b7fc94c270e077f2c5a6931b2165993fe0a6a4f021b541225db67bffda2724487a102079601d511f40e6abe8f43
- ssdeep: 1536:vagGZYOnq4FzEXUy3wrTeYq5+4qqAgLLRirATaUtVB+WLO+idYlL:vtirqWan3wnbo+4qqAgLLRirCaUtVB+
- type: PE32 executable (console) Intel 80386, for MS Windows, UPX compressed

Version Info: 0: [No Data]

Win32/Agent.UHC is also known as:
|K7AntiVirus||Trojan ( 00511e081 )|
|Elastic||malicious (high confidence)|
|Cynet||Malicious (score: 99)|
|K7GW||Trojan ( 00511e081 )|
|ESET-NOD32||a variant of Win32/Agent.UHC|
|SentinelOne||Static AI – Suspicious PE|
|MAX||malware (ai score=81)|

How to remove Win32/Agent.UHC ransomware? Unwanted applications often come bundled with other viruses and spyware. These threats can steal account credentials or encrypt your documents for ransom. Reasons why I would recommend GridinSoft: there is no better way to recognize, remove and prevent PC threats than to use an anti-malware program from GridinSoft. Download GridinSoft Anti-Malware. You can download GridinSoft Anti-Malware by clicking the button below: Run the setup file. When the setup file has finished downloading, double-click on the setup-antimalware-fix.exe file to install GridinSoft Anti-Malware on your system. A User Account Control prompt will ask you whether to allow GridinSoft Anti-Malware to make changes to your device, so click “Yes” to continue with the installation. Press the “Install” button. Once installed, Anti-Malware will run automatically. Wait for the Anti-Malware scan to complete. GridinSoft Anti-Malware will automatically start scanning your system for Win32/Agent.UHC files and other malicious programs. This process can take 20-30 minutes, so I suggest you periodically check on the status of the scan process. Click on “Clean Now”. When the scan has finished, you will see the list of infections that GridinSoft Anti-Malware has detected. To remove them, click on the “Clean Now” button in the right corner. Are you protected? GridinSoft Anti-Malware will scan and clean your PC for free during the trial period. The free version offers real-time protection for the first 2 days. If you want to be fully protected at all times, I recommend purchasing the full version. If the guide doesn’t help you remove Win32/Agent.UHC, you can always ask me in the comments for help.
Longitudinal study of large-scale traceroute results
Rohrer, Justin P.
Traceroute is a popular active probing technique used by researchers, operators, and adversaries to map the structure and connectivity of IP networks. However, traceroute is susceptible to making inaccurate inferences. We perform a large-scale longitudinal investigation of traceroute artifacts to find anomalies that may be indicative of network errors, misconfiguration, or active deception efforts. Using the IPv4 Routed /24 Topology Dataset from the Center for Applied Internet Data Analysis (CAIDA), we provide a taxonomy of traceroute results, including anomalous and unexpected artifacts. We analyze the distribution of the observed artifacts and attempt to find attribution to the cause of each. Finally, we provide a longitudinal analysis of multi-protocol label switching in order to explore possible explanations for unexplained artifacts. Approved for public release; distribution is unlimited.
Reverse engineering is regarded as one of the most difficult specialties in the hacker community. The deconstruction and analysis of software and systems to understand their inner workings is a complex task. It requires a thorough understanding of kernel functions, knowledge of machine code and experience using disassemblers. While a number of reverse engineering tools exist, a modification of the tracing tool DTrace has led to the creation of the robust reverse engineering framework known as RE:Trace…
IETF Pre-Congestion Notification (PCN) working group is working on mechanisms to ensure quality of service of voice and video traffic within a DiffServ domain in the Internet. These mechanisms operate at the domain boundary based on congestion information from within the domain. The reaction mechanisms at the boundary consist of flow admission and flow termination. In this paper, we present a simple analytical model for PCN. Closed form expressions for flow rejection and termination probabilities are provided. If the acceptance and rejection thresholds are set close to each other, thrashing may happen, that is, some flows will be terminated while others are being admitted. We study the sensitivity of the thrashing probability to various system parameters. This will help network service providers to set the parameters accordingly.
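The abstract stops short of reproducing the closed-form expressions it mentions. Purely as an illustration of what a closed-form admission (blocking) probability can look like in loss models of this kind, the classic Erlang-B formula for the probability that an arriving flow is rejected when all capacity units are busy is shown below. This is a standard textbook result offered only for flavor; it is not claimed to be the PCN model derived in the paper.

```latex
% Illustrative only: the classic Erlang-B loss formula, not the paper's PCN model.
% A is the offered load in Erlangs and m is the number of capacity units.
\[
  B(A, m) = \frac{A^{m}/m!}{\sum_{k=0}^{m} A^{k}/k!}
\]
```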
Communication in a VANET relies entirely on the exchange of information among the various vehicles, also called nodes, present in the network. A vehicle can provide safety, security and emergency alerts to other vehicles with the help of a VANET. The network uses the information received from different vehicles to make the majority of its decisions. However, one or more nodes may behave as malicious or selfish nodes to gain an advantage over other vehicles. A misbehaving node may transmit false alerts, alter messages, create congestion in the network, or drop, delay and send identical packets more than once. Hence it is critical and absolutely necessary to detect misbehavior, as it can lead to catastrophic consequences. The paper is organized as follows: the next section contains background information about VANETs, then the categories of misbehavior detection techniques are briefly explained, followed by various research efforts by experts on detecting misbehavior in VANETs. Finally, the paper concludes with various research directions for making VANETs more secure and reliable.
A new and very dangerous computer virus called Paradise Ransomware is infecting various computers right now. It is classified as ransomware and can be detrimental to the state of your computer and personal files. It is a typical ransomware virus in every respect but one: it seems that this virus might be operating as RaaS. RaaS stands for “Ransomware as a Service”. That means all the files and infection methods of this virus were developed by professional programmers, but they left distribution to someone else. In other words, anyone can purchase a “licence” for the Paradise ransomware virus, get access to all the files and instructions on how it should be deployed, and distribute it themselves. It is a kind of affiliate network between cyber criminals. Profits from ransom payments are usually split between those two parties. Cyber security experts have examined this particular infection and discovered that it enters computers using hacked Remote Desktop services. When inside the computer, this virus will restart automatically and run itself as an administrator to be able to perform all needed processes. It uses unique and extremely powerful RSA cryptography, which is really difficult to decrypt. Paradise Ransomware will automatically generate a unique RSA-1024 key and encrypt all of your most important files with it. This virus is associated with 3 different email addresses: [email protected], [email protected] and [email protected] Any one of them can be used in the encryption extension which is added to the end of every encrypted file of yours. Paradise ransomware employs the complicated extension -id-[affiliate_id].[affiliate_email].paradise. For example, if you have a file “text.doc”, after the encryption it might be displayed as “[email protected]”. Once this extension is added to the file, you won’t be able to open it or use it in any other reasonable way. Unlike other ransomware infections, the encryption process of Paradise ransomware is really slow. That means you can notice it in progress and get ahead of it before all of your files are encrypted. If the ransomware successfully encrypts most of the files on your computer, you will spot a new file in every folder called “#DECRYPT MY FILES#.txt”. It is a ransom note: instructions on how to pay the ransom and decrypt files. Original text of the message: Your important files produced on this computer have been encrypted due a security problem If you want to restore them, write us to the e-mail: [email protected] You have to pay for decryption in Bitcoins. The price depends on how fast you write to us. After payment we will send you the decryption tool that will decrypt all your files. [FREE DECRYPTION AS GUARANTEE] Before paying you can send to us up to 3 files for free decryption. Please note that files must NOT contain valuable information and their total size must be less than 1Mb [HOW TO OBTAIN BITCOINS] The easiest way to buy bitcoin is LocalBitcoins site. You have to register, click Buy bitcoins and select the seller by payment method and price Do not rename encrypted files Do not try to decrypt your data using third party software, it may cause permanent data loss If you not write on e-mail in 36 hours – your key has been deleted and you cant decrypt your files The hackers behind Paradise ransomware offer to let you send them 3 encrypted files so they can decrypt them and prove that they have a working decryption method, encouraging you to pay the ransom this way.
You are also directed to make a payment via Bitcoin and contact them via the given email address. Now, there are two possible ways to solve the Paradise ransomware problem. You can pay the ransom and hope that the ransomware developers will keep their promise, or restore your files from a backup. Unlike other ransomware viruses, Paradise does not eliminate shadow volume copies, so if you have any kind of backup file that was made before the encryption, you will be able to restore your system from it. Here are instructions on how to do that: How to restore a system. Update: A decrypter was developed and released by ransomware experts at Emsisoft, here is the link. One way or another, you need to make sure that the virus is no longer on your computer. That can be done by scanning the system with a decent anti-malware program, such as Spyhunter.

Automatic Malware removal tools

How to recover Paradise Ransomware encrypted files and remove the virus

Step 1. Restore system into last known good state using system restore

1. Reboot your computer to Safe Mode with Command Prompt:

for Windows 7 / Vista / XP
- Start → Shutdown → Restart → OK.
- Press the F8 key repeatedly until the Advanced Boot Options window appears.
- Choose Safe Mode with Command Prompt.

for Windows 8 / 10
- Press Power at the Windows login screen. Then press and hold the Shift key and click Restart.
- Choose Troubleshoot → Advanced Options → Startup Settings and click Restart.
- When it loads, select Enable Safe Mode with Command Prompt from the list of Startup Settings.

2. Restore System files and settings.
- When Command Prompt mode loads, enter cd restore and press Enter.
- Then enter rstrui.exe and press Enter again.
- Click “Next” in the window that appears.
- Select one of the Restore Points that are available from before Paradise Ransomware infiltrated your system and then click “Next”.
- To start System restore click “Yes”.

Step 2. Complete removal of Paradise Ransomware
After restoring your system, it is recommended to scan your computer with an anti-malware program, like Spyhunter, and remove all malicious files related to Paradise Ransomware. You can check other tools here.

Step 3. Restore Paradise Ransomware affected files using Shadow Volume Copies
If you do not use the System Restore option on your operating system, there is a chance to use shadow copy snapshots. They store copies of your files from the point in time when the system restore snapshot was created. Usually Paradise Ransomware tries to delete all possible Shadow Volume Copies, so this method may not work on all computers; however, it may fail to do so. Shadow Volume Copies are only available with Windows XP Service Pack 2, Windows Vista, Windows 7, and Windows 8. There are two ways to retrieve your files via Shadow Volume Copy. You can do it using native Windows Previous Versions or via Shadow Explorer.

a) Native Windows Previous Versions
Right-click on an encrypted file and select Properties → Previous versions tab. Now you will see all available copies of that particular file and the time when it was stored in a Shadow Volume Copy.
Choose the version of the file you want to retrieve and click Copy if you want to save it to some directory of your own, or Restore if you want to replace existing, encrypted file. If you want to see the content of file first, just click Open. b) Shadow Explorer It is a program that can be found online for free. You can download either a full or a portable version of Shadow Explorer. Open the program. On the left top corner select the drive where the file you are looking for is a stored. You will see all folders on that drive. To retrieve a whole folder, right-click on it and select “Export”. Then choose where you want it to be stored. Step 4. Use Data Recovery programs to recover Paradise Ransomware encrypted filesThere are several data recovery programs that might recover encrypted files as well. This does not work in all cases but you can try this: - We suggest using another PC and connect the infected hard drive as slave. It is still possible to do this on infected PC though. - Download a data recovery program. - Install and scan for recently deleted files.
Most ChePro samples are downloaders which need other files to complete the infection. Usually they install banking malware that will take screenshots, capture keyboard strokes, and read the content of the clipboard. Malware in this family can be used to attack virtually any Internet banking service. This malware implements new techniques for the purpose of avoiding detection for as long as possible. Several Trojans use geolocation or query the operating system for the user’s timezone and Microsoft Windows version. The Trojans will not attempt to complete an infection if the computer’s IP address is not Brazilian, the operating system is set to a timezone that is outside of Brazil, or the system language is not Portuguese (Brazil). (The original report includes a table of the top 10 countries with the most attacked users, as a percentage of all unique Kaspersky users attacked by this malware.)
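The timezone, Windows version and language checks described above are simple to picture. The sketch below shows, in a benign and purely hypothetical form, the kind of environment test such a geolocation-gated downloader might perform before deciding to proceed; it is illustrative only and is not code from the ChePro family.

```python
# Benign illustration of the environment checks described above (system language,
# Windows platform, timezone). NOT ChePro code; just a sketch of the idea.
import locale
import platform
import time


def looks_like_brazilian_host() -> bool:
    lang, _ = locale.getdefaultlocale() or (None, None)   # e.g. "pt_BR"
    portuguese_brazil = (lang or "").lower().startswith("pt_br")
    is_windows = platform.system() == "Windows"
    # Brazil is UTC-3 (UTC-2 under historical DST); time.timezone is seconds west of UTC.
    brazil_offset = time.timezone in (3 * 3600, 2 * 3600)
    return is_windows and portuguese_brazil and brazil_offset


if __name__ == "__main__":
    # A real sample would silently abort the infection here; we just print the decision.
    print("environment matches target profile:", looks_like_brazilian_host())
```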
Chapter 18. Source Code Auditing: Finding Vulnerabilities in C-Based Languages Auditing software with the source code is often the most effective way to discover new vulnerabilities. A large amount of widely deployed software is open source, and some commercial vendors have shared their operating system source code with the public. With some experience, it is possible to detect obvious flaws quickly and more subtle flaws with time. Although binary analysis is always a possibility, the availability of source code makes auditing much easier. This chapter covers auditing source code written in C-based languages for both simple and subtle vulnerabilities, and mainly focuses on detecting memory-corruption vulnerabilities. Many people audit source code, and each has his or her own reasons for doing so. Some audit code as part of their jobs or as a hobby, whereas others simply want an idea of the security of the applications they run on their systems. There are undoubtedly people out there who audit source code to find ways to break into systems. Whatever the reason for auditing, source code review is arguably the best way to discover vulnerabilities in applications. If the source code is available, use it. The argument about whether it's more difficult to find bugs or to exploit them has been thrown around a fair bit, and cases can be made for either side. Some vulnerabilities are extremely obvious to anyone reading the source but turn out to be nearly unexploitable in any practical situation. ...
Using the command prompt "attrib" to check for viruses or malware

The Microsoft Command Prompt tool "attrib" is very useful for checking whether your hard drives, and even your flash disks, have been infected by a virus. You will know that a piece of malware is on your drive just by looking at the attributes of each file: an infected file typically carries the attributes +s +h +r. The function of attrib is to set and remove file attributes (read-only, archive, system and hidden).

To start attrib
- Go to Start Menu > Run
- Type cmd (cmd stands for command prompt)
- Press the Enter key

The Command Prompt will appear, showing our current location in the directory tree.

To use attrib
- Go to the root directory first by typing cd\ (because this is always the target of malware / viruses)
- Type attrib and press the Enter key

In this example, I have two files that are considered malware. Note that there are two files which I outlined in red (SilentSoftech.exe and autorun.inf). You cannot see these files nor delete them, because the attributes set on them are +s +h +r:
- +s - means it is a system file (which also means that you cannot delete it just by using the delete command)
- +h - means it is hidden (so you cannot delete it)
- +r - means it is a read-only file (which also means that you cannot delete it just by using the delete command)

Now we need to set the attributes of autorun.inf to -s -h -r (so that we can manually delete it):
- Type attrib -s -h -r autorun.inf (be sure to include -s -h -r because you cannot change the attributes using only -s or -h or -r alone)
- Type attrib again to check whether your changes have been committed
- If the autorun.inf file has no more attributes, you can now delete it by typing del autorun.inf
- Since SilentSoftech.exe is malware, you can remove its attributes the same way (just change the filename), e.g. attrib -s -h -r silentsoftech.exe

There you have it!!!!

NOTE: when autorun.inf keeps coming back even after you have deleted it, be sure to check your Task Manager by pressing CTRL + ALT + DELETE (a virus is still running as a process, which is why you cannot delete the file; KILL the process first by selecting it and clicking End Process).

NOTE: You can also apply the attrib -s -h -r command to all the partitions of your computer: drive D:, drive E:, drive F: (all of your drives). For example, for drive D, just type "D:" (minus the double quotes); you can then see that your current drive is D. Type the command "attrib -s -h -r *.exe" for exe files and "attrib -s -h -r *.inf" for inf files, and then delete the file with "del autorun.inf". Hope this helps!!!!! :) Jah bles!

NOTE: If you want more detailed information on how to delete a virus, visit my other hub: HOW TO DELETE A VIRUS IN YOUR USB/FLASHDISK
Harlan Carvey is a computer forensics author, researcher and practitioner. He has written several books and tools focusing on Windows systems and incident response. His computer forensics blog Windows Incident Response is updated on a regular basis. - Windows IR/CF Tools - Hosted on Sourceforge, includes files for the Forensic Server Project and Windows Memory Analysis. - Windows Forensic Analysis (forthcoming) - Windows Forensics and Incident Recovery - A Study of Video Teleconferencing Traffic on a TCP/IP Network
Security is taken seriously when dealing with open source accountability. And it’s no different when developers embrace using Docker, from building applications locally right up to production deployments. A big responsibility that comes with being deployed in so many places is a serious focus on the security of Docker as a project and a platform. As a result, we’ve decided to discuss, in part 5 of our Docker Tutorial Series, the key areas of security Docker focuses on and why they matter to overall Docker security. Given that Docker is typically an extension of LXC, it easily uses the security features of LXC, too. In the first part of this series, we discussed that a docker run command is executed to spin up and run a container. However, here’s what really happens:
1. A docker run command is initiated.
2. Docker runs lxc-start to execute the run command.
3. A set of namespaces and control groups are created for the container by lxc-start.

For those who are not aware of the namespace and control group concepts: namespaces are the first level of isolation, whereby no two containers can view or control the processes running in the other. Each container is assigned a separate network stack, and, hence, one container does not get access to the sockets of another container. To allow IP traffic between containers, you must specify public IP ports for the container.

Control Groups, the key component, have the following functionalities:
- Responsible for resource accounting and limiting.
- Provide metrics pertaining to the CPU, memory, I/O and network.
- Try to avoid certain DoS attacks.
- Significant on multi-tenant platforms.

Docker Daemon's Attack Surface

The Docker daemon runs with root privileges, which implies there are some issues that need extra care. Some interesting points include the following:
- Control of the Docker daemon should only be given to authorized users, as Docker allows directory sharing with a guest container without limiting access rights.
- The REST API endpoint now supports UNIX sockets, thereby preventing cross-site-scripting attacks.
- The REST API can be exposed over HTTP using appropriate trusted networks and VPNs.
- Run Docker exclusively on a server (when done), isolating all other services.

Some key Docker security features include the following:
- Processes, when run as non-privileged users in the containers, maintain a good level of security.
- AppArmor, SELinux and GRSEC solutions can be used for an extra layer of security.
- There’s a capability to inherit security features from other containerization systems.

For managing several processes relating to authorization and security, Docker provides a REST API (the original post lists some of its security-related commands in a table). So, if you are a developer and security in an open source environment is near the top of your list of concerns, let’s continue the conversation about Docker security. You can do so by contacting us at [email protected], or learn more by visiting us at www.flux7.com/docker.
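To tie this back to the docker run discussion above, here is a small sketch using the Docker SDK for Python that applies a few of the hardening measures mentioned in the post (non-privileged user, dropped capabilities, read-only filesystem, resource limits). The image, command and the specific limits are illustrative assumptions, not recommendations from the original tutorial.

```python
# Minimal sketch: run a container with several hardening options via docker-py.
import docker

client = docker.from_env()          # talks to the local (root-owned) Docker daemon

output = client.containers.run(
    image="alpine:3.19",
    command="id",
    user="1000:1000",               # run as a non-privileged user inside the container
    cap_drop=["ALL"],               # drop every Linux capability the process does not need
    security_opt=["no-new-privileges"],
    read_only=True,                 # immutable root filesystem
    mem_limit="64m",                # control-group memory limit
    pids_limit=64,                  # mitigate fork-bomb style DoS inside the container
    network_disabled=True,          # no network stack unless the workload needs one
    remove=True,
)
print(output.decode())
```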
Professor Dongwon Lee Dongwon Lee is a full professor in the College of Information Sciences and Technology (a.k.a. iSchool) at Penn State University, USA, an ACM Distinguished Scientist (2019), and a Fulbright Cyber Security Scholar (2022). Before starting at Penn State, he worked at AT&T Bell Labs and obtained his Ph.D. in Computer Science from UCLA. From 2015 to 2017, he also served as a Program Director at the National Science Foundation (NSF), co-managing cybersecurity research and education programs and contributing to the development of national research priorities. In general, he researches problems at the intersection of data science, machine learning, and cybersecurity. Since 2017, in particular, he has led the SysFake project at Penn State, investigating computational and socio-technical solutions to better combat fake news. More details of his research can be found at: http://pike.psu.edu/. During the academic year of 2022-2023, he is visiting the University of Cambridge, supported by the Fulbright program, to perform collaborative research on fake news and deepfakes.
CMMC Practice CM.L2-3.4.8 – Application Execution Policy: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software. Links to Publicly Available Resources Discussion [NIST SP 800-171 R2] The process used to identify software programs that are not authorized to execute on systems is commonly referred to as blacklisting. The process used to identify software programs that are authorized to execute on systems is commonly referred to as whitelisting. Whitelisting is the stronger of the two policies for restricting software program execution. In addition to whitelisting, organizations consider verifying the integrity of whitelisted software programs using, for example, cryptographic checksums, digital signatures, or hash functions. Verification of whitelisted software can occur either prior to execution or at system startup. NIST SP 800-167 provides guidance on application whitelisting. Organizations should determine their blacklisting or whitelisting policy and configure the system to manage software that is allowed to run. Blacklisting or deny-by-exception allows all software to run except if on an unauthorized software list such as what is maintained in antivirus solutions. Whitelisting or permit-by-exception does not allow any software to run except if on an authorized software list. The stronger policy of the two is whitelisting. This practice, CM.L2-3.4.8, requires the implementation of allow-lists and deny-lists for application software. It leverages CM.L2-3.4.1, which requires the organization to establish and maintain software inventories. This practice, CM.L2-3.4.8, also extends CM.L2-3.4.9, which only requires control and monitoring of any user installed software.
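As an illustration of the integrity-verification idea mentioned in the discussion (checking allow-listed software against cryptographic hashes), here is a minimal, hypothetical sketch in Python. It is not part of the CMMC practice text or any particular allow-listing product; the directory path and digest are placeholders.

    import hashlib
    from pathlib import Path

    # Hypothetical allow-list: SHA-256 digests of authorized executables.
    ALLOWED_SHA256 = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder (SHA-256 of an empty file)
    }

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_authorized(path: Path) -> bool:
        # Permit-by-exception: the file may run only if its hash is on the allow-list.
        return sha256_of(path) in ALLOWED_SHA256

    for exe in Path(r"C:\Program Files\ExampleApp").rglob("*.exe"):   # placeholder path
        print(exe, "allowed" if is_authorized(exe) else "not on the allow-list")

A real deployment would enforce this at execution time through an operating-system application control mechanism rather than a standalone script; the point here is only how hash-based verification of an allow-list works.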
In today's cloud-based world, organizations face the challenge of managing complex and distributed network architectures. However, with the introduction of AWS Transit Gateway, network connectivity and management have become simpler and more efficient. In this blog post, we will explore how AWS Transit Gateway can centralize your infrastructure, providing seamless connectivity between Amazon Virtual Private Clouds (VPCs), on-premises networks, and other AWS services. In a centralized approach, you establish a dedicated networking account or a central network services VPC (Virtual Private Cloud) that acts as a hub for connecting multiple AWS accounts. This central account serves as the network management and security hub, where you can configure and manage connectivity, security policies, and network services. This approach offers centralized control, easier management of network policies, and the ability to enforce consistent security measures across accounts. It is suitable for organizations that require strict network governance, compliance, and centralized visibility and control over network traffic. Using AWS Transit Gateway instead of VPC peering connections is a valid and recommended approach for implementing centralized network connectivity between multiple AWS member accounts under one AWS Organization. AWS Transit Gateway simplifies network connectivity and provides centralized control and management. Here's an updated step-by-step guide: Step 1: Set up AWS Organization Create an AWS Organization if you haven't already. Define your organizational units (OUs) to group and manage your member accounts effectively. Step 2: Establish Networking Account Designate one account as the Networking Account or Hub account responsible for centralized network management. Configure necessary VPCs, subnets, and networking resources in the Networking Account. Step 3: Establish Member Accounts Create individual member accounts within your AWS Organization for each team or business unit. Each member account should have its own VPCs, subnets, and networking resources. Step 4: Set up AWS Transit Gateway Create an AWS Transit Gateway in the Networking Account. Associate the VPCs in each member account with the Transit Gateway. Step 5: Configure Routing Configure route tables in the Networking Account's VPC and associate them with the Transit Gateway. Define and propagate appropriate routes to enable connectivity between member account VPCs and the Networking Account. Step 6: Implement Security Measures Define and enforce security policies consistently across member accounts using AWS Identity and Access Management (IAM) and AWS Organizations service control policies (SCPs). Utilize security groups, network ACLs, and AWS Web Application Firewall (WAF) to secure network traffic. Step 7: Integrate with On-Premises Environment Establish a secure connection between the on-premises environment and the Networking Account using AWS Direct Connect or VPN. Configure appropriate routing and security measures to enable connectivity between on-premises and AWS accounts. - AWS Direct Connect AWS Direct Connect is a high-speed, low-latency connection that allows you to access public and private AWS Cloud services from your local (on-premises) infrastructure. The connection is enabled via dedicated lines and bypasses the public Internet to help reduce network unpredictability and congestion. AWS Direct Connect does not encrypt your traffic that is in transit by default. 
To encrypt the data in transit that traverses AWS Direct Connect, you must use the transit encryption options for that service. - AWS Site-to-Site VPN AWS Site-to-Site VPN is a hardware IPsec VPN that enables you to create an encrypted connection between Amazon VPC and your private IT infrastructure over the public Internet. VPN connections allow you to extend existing on-premises networks to your VPC as if they were running in your infrastructure. Note: Secure your AWS Direct Connect connection with AWS VPN: By combining AWS Direct Connect connections with the AWS Site-to-Site VPN, you can leverage the benefits of both technologies. This solution offers the advantages of the secure encryption provided by the end-to-end AWS VPN IPSec connection while also capitalizing on the low latency and increased bandwidth offered by AWS Direct Connect. The result is a more reliable and consistent network experience compared to VPN connections that rely solely on the internet. This combination ensures that data flowing through the network remains secure while benefiting from improved network performance, providing an optimal solution for your connectivity needs. Step 8: Monitor and Manage Implement monitoring and logging solutions, such as Amazon CloudWatch and AWS CloudTrail, to track network activity and security events. Continuously monitor and manage network resources, scaling and optimizing as needed. AWS Transit Gateway revolutionizes network connectivity by providing a centralized and scalable solution for managing complex network architectures. With simplified VPC and on-premises connectivity, enhanced security, and streamlined network management, organizations can achieve significant operational efficiencies and improve their overall network performance. By adopting AWS Transit Gateway, organizations can centralize their infrastructure, simplify network management, and scale their connectivity as their business grows. Whether it's connecting VPCs, extending connectivity to on-premises networks, or implementing robust security measures, Transit Gateway offers the flexibility and power needed to meet the demands of modern cloud-based environments. Embrace the power of AWS Transit Gateway and unlock new possibilities for simplifying and optimizing your network connectivity. Start centralizing your infrastructure today and experience the benefits of a streamlined and scalable network architecture.
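As a rough sketch of what Steps 4 and 5 can look like in code, the snippet below uses boto3 (the AWS SDK for Python) to create a Transit Gateway and attach a VPC to it. It assumes credentials for the central Networking Account; all resource IDs are placeholders, and in a real multi-account setup the Transit Gateway would typically be shared with member accounts through AWS Resource Access Manager, with attachments created from those accounts.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Step 4: create the Transit Gateway in the Networking Account.
    tgw = ec2.create_transit_gateway(
        Description="Central hub for the AWS Organization",
        Options={
            "DefaultRouteTableAssociation": "enable",
            "DefaultRouteTablePropagation": "enable",
            "DnsSupport": "enable",
        },
    )
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach a VPC (placeholder IDs) once the gateway is available.
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],   # one subnet per Availability Zone is typical
    )

    # Step 5: point a VPC route table at the Transit Gateway for inter-VPC/on-premises ranges.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="10.0.0.0/8",
        TransitGatewayId=tgw_id,
    )

The same calls can, of course, be expressed in CloudFormation or Terraform if you prefer declarative provisioning.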
The Security group opens on the Logs page. Here you can view events that have occurred on the router, specify what types of events must be written into the log, and transfer the log to a specified location (for example, to a certain IP address). On the Block Sites page you can enter a list of sites that will be blocked by the router based on keywords or domain names. This blocking can be permanent or scheduled. The Firewall Rules page is where you specify new rules for the integrated SPI firewall and view, edit or delete the existing ones. If you don't find the required service when writing your firewall rules, you can define it on the Services page. The Schedule page is for scheduling the router's rules. To send a notification about an attack on the router, or to send a copy of the log on a schedule, you should configure the integrated mail service on the E-mail page.
Antivirus software installed on your machine can detect malware if it knows the signature or can detect the unique pattern for that malware. On the other hand, malware attached to an email or downloaded from a website can also be tagged as malicious using heuristic technology. Some heuristic detection methods involve looking into readable and printable strings within the file, such as the names of APIs (Application Programming Interfaces) that can be used for malicious activities. These APIs are not malicious by themselves, but a combination of them in a single executable file can trigger the heuristic detection and flag the file as malicious. Some heuristic detection methods also use the entropy of the file in order to flag it as suspicious. Entropy is a measure of how the bytes are arranged within the file. A high entropy value suggests that a file is encrypted or packed, which can also trigger heuristic detection.

We found a new downloader that tries to evade heuristic detection by minimizing the exposure of some important APIs. Moreover, the whole file is not encrypted, which helps it avoid entropy-based heuristic detection. This downloader is detected as W32/Onkod.

Enumerating the printable strings within Onkod shows no sign of API names and no URL links (see Figure 1) that could suggest malicious intent. The only noticeable element is the string "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:22.0) Gecko/20100101 Firefox/22.0". This string indicates that it is going to perform some sort of browsing or internet activity. We will refer back to this list in the following sections. The boxed strings will play some important roles in the malware's execution, as we will show later on.

Executing the Malware

After executing the file, we detected some internet activity, which suggests that internet-related APIs are triggered within the code. However, this was not shown in the list of strings. The internet-related API names are encrypted, as are the other APIs needed by the malware. After the decryption, we can clearly see the names of these APIs, including those that the malware uses for its internet connections. These APIs are resolved using the GetProcAddress API (see Figure 2).

After resolving the needed APIs, the malware downloads the file "av.exe" (see Figure 3) and saves it to the %Temp% folder using a 10-digit pseudo-random filename, such as "4712434768.exe". The User-Agent ("Mozilla/5.0 (Windows NT 6.1; WOW64; rv:22.0) Gecko/20100101 Firefox/22.0") used to download the file can be found in the list of strings in Figure 1.

During execution, the downloaded file that was saved to the %Temp% folder is executed. The downloaded file then drops another malware, which is a variant of the FakeAV trojan. Finally, W32/Onkod displays a message box, which is shown in Figure 4. The title and message can also be found in the list of strings shown in Figure 1. Below is the fake error message that signifies the completion of the downloader's process. This is displayed while the FakeAV variant is running in the background.

W32/Onkod avoids heuristic detection by hiding its suspicious properties. However, digging a little deeper into the code reveals that it is capable of doing more damage to a system once it is able to pass through this layer of security. If the malware is already running, always be on the lookout for some of its visible symptoms, such as its fake error message and unwanted internet activity.
In order to avoid being infected by these types of malware, always take extreme care when executing normal-looking executable files. Better yet, do not execute any file that comes from an email or from an untrusted website.
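As a small aside on the entropy heuristic described above, the following Python sketch computes the Shannon entropy of a file in bits per byte. The file name and threshold are hypothetical; real engines combine entropy with many other signals, and, as the Onkod analysis shows, a low overall entropy by itself is no guarantee that a file is clean.

    import math
    from collections import Counter
    from pathlib import Path

    def shannon_entropy(data: bytes) -> float:
        # Shannon entropy in bits per byte: 0.0 (all bytes identical) to 8.0 (uniformly random).
        if not data:
            return 0.0
        total = len(data)
        return -sum((count / total) * math.log2(count / total)
                    for count in Counter(data).values())

    sample = Path("suspicious_sample.bin").read_bytes()   # placeholder file name
    score = shannon_entropy(sample)
    flag = "suspiciously high (possibly packed or encrypted)" if score > 7.2 else "unremarkable"
    print(f"entropy: {score:.2f} bits/byte -> {flag}")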
A new variant of ransomware virus has been discovered by cyber threat analysts. It appends the .gehad file extension to encrypted files. This ransomware targets computers running MS Windows by spam emails, malicious software or manually installing the ransomware. This blog post will provide you a brief summary of information related to this ransomware virus and how to restore (decrypt) encrypted documents, photos and music for free. Once installed, the Gehad ransomware begins searching for attached disks and even networked drives containing documents, images, web application-related files, videos, archives, music and database. It is able to encrypt almost all types of files, including common as: .1, .xlsm, .fos, .p7b, .ws, .xld, .wn, .xls, .fpk, .docm, .vcf, .zdb, .pem, .pst, .doc, .wma, .bc7, .wm, .desc, .dng, .hplg, .wri, .wpb, .blob, .wpl, .raf, .ibank, .sid, .vpp_pc, .x3d, .webp, .srf, .mov, .wbk, .kf, .css, .xlk, .raw, .pak, .crt, .ods, .xlgc, .bkp, .xbdoc, .crw, .avi, .hkdb, .cr2, .mp4, .ysp, .xf, .slm, .wbd, .wp6, .srw, .wbz, .wpd, .lrf, .vfs0, .wmv, .accdb, .m2, .layout, .pkpass, .hkx, .sie, .d3dbsp, .kdc, .ltx, .iwi, .odt, .bc6, .ybk, .wpw, .ff, .0, .xlsx, .wdp, .vpk, .t12, .lbf, .vtf, .tor, .xdb, .esm, .xar, .xyp, .3dm, .xpm, .orf, .wmv, .csv, .zip, .pfx, .qic, .wpa, .cfr, .py, .wcf, .wbc, .xwp, .wgz, .xls, .dazip, .wpt, .wb2, .xlsx, .ntl, .wma, .wsd, .p7c, .map, .wire, .3ds, .webdoc, .qdf, .y, .wp7, .rtf, .rim, .sidd, .zw, .xlsm, .z, .wav, .wsc, .dcr, .iwd, .pptm, .png, .snx, .wmd, .x, .rgss3a, .vdf, .psk, .1st, .wotreplay, .7z, .t13, .bay, .wbm, .wpg, .xll, .zdc, .m3u, .dwg, .xlsb, .rwl, .sql, .erf, .jpg, .itdb, .sav, .ptx, .mdbackup, .wps, .dbf, .wp5, .wbmp, .jpeg, .rb, .big, .der, .xxx, .x3f, .sidn, .lvl, .xyw, .zip, .tax, .rw2, .m4a, .wps, .js, .mdb, .xmind, .pdd, .dba, .xmmap, .menu, .bar, .wmf, .syncdb, wallet, .arw, .wpe, .xbplate, .mdf, .txt, .gho, .bkf, .p12, .ncf, .das, .yal, .rofl, .zi, .epk, .pef, .odb With the encryption work done, all encrypted personal files will now have the new .gehad extension appended to them. Gehad ransomware drops a file called ‘_readme.txt’. This file contains a ransom note that is written in the English language. The ransom note directs victims to make payment to a cryptocurrency wallet in exchange for the keys needed to decrypt files. Don't worry, you can return all your files! All your files like photos, databases, documents and other important are encrypted with strongest encryption and unique key. The only method of recovering files is to purchase decrypt tool and unique key for you. This software will decrypt all your encrypted files. What guarantees you have? You can send one of your encrypted file from your PC and we decrypt it for free. But we can decrypt only 1 file for free. File must not contain valuable information. You can get and look video overview decrypt tool: https://we.tl/t-514KtsAKtH Price of private key and decrypt software is $980. Discount 50% available if you contact us first 72 hours, that's price for you is $490. Please note that you'll never restore your data without payment. Check your e-mail "Spam" or "Junk" folder if you don't get answer more than 6 hours. |Type||Filecoder, File locker, Ransomware, Crypto virus, Crypto malware| |Encrypted files extension||.gehad| |Ransom amount||$980 in Bitcoins| |Symptoms||Unable to open files. Odd, new or missing file extensions. 
Files named such as '_readme.txt' or '_readme' in every folder with an encrypted file.| |Distribution methods||Spam mails that contain malicious links. Drive-by downloading (when a user unknowingly visits an infected web site and then malicious software is installed without the user's knowledge). Social media posts (they can be used to force users to download malicious software with a built-in ransomware downloader or click a misleading link). Torrent web pages.| |Removal||To remove Gehad ransomware use the removal guide| |Decryption||To decrypt Gehad ransomware use the steps|

In the tutorial below, I have outlined a few methods that you can use to remove Gehad ransomware from your personal computer and restore .gehad files from shadow volume copies or using file recovery apps.
- How to remove Gehad crypto virus
- How to decrypt .gehad files
- How to restore .gehad files
- How to protect your system from Gehad crypto virus?
- Finish words

How to remove Gehad crypto virus

There are not many good free antimalware applications with a high detection ratio. The effectiveness of malicious software removal utilities depends on various factors, mostly on how often their virus/malware signature databases are updated in order to effectively detect modern worms, trojans, ransomware and other malware. We suggest running several applications, not just one. The applications listed below will allow you to uninstall all components of the Gehad ransomware from your disk and Windows registry.

How to remove Gehad ransomware virus with Zemana Free

Zemana Free is a malicious software scanner that is very effective for detecting and removing Gehad ransomware. The steps below explain how to download, install, and use Zemana Free to scan your computer and remove ransomware, spyware, adware, malware, trojans and worms for free.
- Download Zemana Anti-Malware (ZAM) on your machine from the link below. Author: Zemana Ltd Category: Security tools Update: July 16, 2019
- At the download page, click on the Download button. Your web browser will open the "Save as" dialog box. Please save it onto your Windows desktop.
- After the download is done, please close all applications and open windows on your PC system. Next, run the file named Zemana.AntiMalware.Setup.
- This will run the Setup wizard of Zemana Free on your personal computer. Follow the prompts and don't make any changes to the default settings.
- When the Setup wizard has finished installing, Zemana Free will start and show the main window.
- Next, click the "Scan" button to scan your system for files, folders and registry keys related to the Gehad ransomware. This procedure can take quite a while, so please be patient. While the Zemana program is scanning, you can see how many objects it has identified as threats.
- After the system scan is done, Zemana AntiMalware will prepare a list of unwanted apps and crypto virus components.
- You may delete items (move them to Quarantine) by simply clicking the "Next" button. The utility will uninstall files, folders and registry keys related to the Gehad ransomware. When the clean-up is finished, you may be prompted to reboot the PC system.
- Close Zemana and continue with the next step.

Remove Gehad virus with MalwareBytes

Getting rid of the Gehad ransomware manually is difficult, and often the ransomware is not completely removed. Therefore, we recommend running MalwareBytes Free, which will fully clean your computer.
Moreover, this free program will allow you to uninstall malware, potentially unwanted programs, toolbars and adware that your machine may also be infected with. Please go to the link below to download MalwareBytes AntiMalware (MBAM). Save it on your Windows desktop or in any other place. Category: Security tools Update: April 15, 2020

Once the downloading process is finished, close all windows on your computer. Next, run the file named mb3-setup. If the "User Account Control" prompt pops up as displayed in the following example, click the "Yes" button. It will show the Setup wizard that will allow you to install MalwareBytes on the machine. Follow the prompts and do not make any changes to the default settings. Once setup has finished successfully, click the Finish button.

MalwareBytes AntiMalware will then automatically launch, and you will see its main window as displayed in the following example. Next, click the "Scan Now" button to begin checking your machine for the Gehad crypto virus and other kinds of potential threats such as malware and trojans. When a threat is found, the count of security threats will change accordingly. After MalwareBytes Free completes the scan, it will open a scan report. Review the report and then press the "Quarantine Selected" button. MalwareBytes Anti-Malware (MBAM) will uninstall the Gehad crypto malware and other kinds of potential threats such as malicious software and trojans, and add the items to the Quarantine. Once disinfection is finished, you may be prompted to restart your machine. We suggest you look at the following video, which fully explains the procedure of using MalwareBytes Free to delete browser hijackers, adware and other malicious software.

Use KVRT to remove Gehad ransomware virus

KVRT is a free removal utility that can scan your PC system for a wide range of security threats like the Gehad crypto virus, adware, PUPs and other malicious software. It will perform a deep scan of your PC, including hard drives and the Microsoft Windows registry. When malware is found, it will help you to remove all found threats from your computer with a simple click.

Download Kaspersky virus removal tool (KVRT) by clicking on the link below. Save it to your Desktop so that you can access the file easily. Author: Kaspersky® lab Category: Security tools Update: March 5, 2018

When downloading is finished, double-click on the KVRT icon. Once the initialization procedure is complete, you will see the KVRT screen as on the image below. Click Change Parameters and set a check near all your drives. Click OK to close the Parameters window. Next, click the Start scan button. The Kaspersky virus removal tool will start scanning the whole system to find the Gehad crypto malware and other known infections. A system scan can take anywhere from 5 to 30 minutes, depending on your system. While the KVRT program is scanning, you may see how many objects it has identified as threats. When the check is complete, Kaspersky virus removal tool will open a screen that contains a list of the malicious software that has been detected, like the one below. You may remove threats (move them to Quarantine) by simply clicking Continue to begin the clean-up.

How to decrypt .gehad files

The encryption algorithm is so strong that it is practically impossible to decrypt .gehad files without the actual encryption key. The bad news is that the only way to get your files back is to pay the makers of the Gehad ransomware ($980 in Bitcoin) for a copy of the private key.
Should you pay the ransom? A majority of experienced security professionals will reply immediately that you should never pay a ransom if infected by ransomware! If you choose to pay the ransom, there is no 100% guarantee that you can decrypt all your photos, documents and music! With some variants of Gehad ransomware, it is possible to decrypt encrypted files using the free tools listed below.

Michael Gillespie (@) released the Gehad decryption tool named STOPDecrypter. It can decrypt .gehad files if they were encrypted with one of the known OFFLINE KEYs retrieved by Michael Gillespie. Please check the Twitter post for more info. STOPDecrypter is a program that can be used to decrypt Gehad files. One of the biggest advantages of using STOPDecrypter is that it is free and easy to use. Also, it constantly keeps updating its OFFLINE KEYs database. Let's see how to install STOPDecrypter and decrypt .gehad files using this free tool.
- Installing STOPDecrypter is simple. First you will need to download STOPDecrypter to your Windows Desktop from the following link.
- After the downloading process is done, close all applications and windows on your machine. Open the file location. Right-click on the icon named STOPDecrypter.zip.
- Next, select 'Extract all' and follow the prompts.
- Once the extraction process is finished, run STOPDecrypter. Select the directory and press the Decrypt button.

How to restore .gehad files

In some cases, you can recover files encrypted by the Gehad crypto virus. Try both methods. It is important to understand that we cannot guarantee that you will be able to recover all encrypted files.

Use ShadowExplorer to restore .gehad files

In order to restore .gehad files encrypted by the Gehad crypto malware from Shadow Volume Copies, you can use a tool named ShadowExplorer. We recommend using this method, as it makes it easy to find and restore the previous versions of the encrypted files you need in an easy-to-use interface. First, click the link below, then click the 'Download' button in order to download the latest version of ShadowExplorer. Category: Security tools Update: September 15, 2019

Once the download is complete, open the directory in which you saved it. Right-click ShadowExplorer-0.9-portable and select Extract all. Follow the prompts. Next, open the ShadowExplorerPortable folder as shown on the screen below. Launch the ShadowExplorer tool and then choose the disk (1) and the date (2) from which you want to restore the shadow copy of the file(s) encrypted by the Gehad ransomware, as displayed on the image below. Now navigate to the file or folder that you want to recover. When ready, right-click on it and click the 'Export' button as displayed in the figure below.

Restore .gehad files with PhotoRec

Before a file is encrypted, the Gehad crypto virus makes a copy of the file, encrypts the copy, and then deletes the original file. This can allow you to recover your personal files using file recovery apps such as PhotoRec. Download PhotoRec to your Microsoft Windows Desktop by clicking on the link below. Category: Security tools Update: March 1, 2018

When the downloading process is done, open the directory in which you saved it. Right-click testdisk-7.0.win and choose Extract all. Follow the prompts. Next, open the testdisk-7.0 folder as shown below. Double-click on qphotorec_win to run PhotoRec for Microsoft Windows. It will display a screen as shown in the following example. Choose a drive to recover from, such as the one below. You will see a list of available partitions.
Select a partition that holds the encrypted photos, documents and music, as on the image below. Click the File Formats button and specify the file types to restore. You can enable or disable the recovery of certain file types. When this is complete, click the OK button. Next, click the Browse button to choose where recovered personal files should be written, then click Search.

The count of recovered files is updated in real time. All restored files are written to the folder that you chose in the previous step. You can access the files even if the recovery process is not finished. When the recovery is done, press the Quit button. Next, open the directory where the recovered files are stored. You will see contents like those on the image below. All recovered personal files are written to the recup_dir.1, recup_dir.2 … sub-directories. If you're looking for a specific file, you can sort your recovered files by extension and/or date/time.

How to protect your system from Gehad crypto virus?

Most antivirus programs already have a built-in protection system against ransomware. Therefore, if your computer does not have an antivirus program, make sure you install one. As extra protection, run HitmanPro.Alert.

Run HitmanPro.Alert to protect your machine from Gehad ransomware

HitmanPro.Alert is a small security tool. It checks system integrity and alerts you when critical system functions are affected by malware. HitmanPro.Alert can detect, remove, and reverse ransomware effects. Visit the following page to download the latest version of HitmanPro.Alert for Microsoft Windows. Save it to your Desktop. Category: Security tools Update: March 6, 2019

When downloading is complete, open the file location. You will see an icon like the one below. Double-click the HitmanPro.Alert desktop icon. After the utility opens, you will see a window where you can choose a level of protection, as shown in the following example. Now click the Install button to activate the protection.

After completing the step-by-step guide above, your PC should be free of the Gehad crypto virus and other malicious software, and the ransomware will no longer encrypt your personal files. Unfortunately, if the step-by-step tutorial does not help you, then you have caught a new variant of the crypto malware, and the best option is to ask for help here.
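If you want a quick inventory of what was hit before attempting any of the recovery methods above, the following Python sketch walks a folder, lists files with the appended .gehad extension, and locates the ransom notes. The root path is a placeholder; this is only a survey script and does not decrypt anything.

    import os
    from pathlib import Path

    ROOT = Path("C:/Users")            # placeholder: drive or folder to survey

    encrypted, notes = [], []
    for dirpath, _dirnames, filenames in os.walk(ROOT, onerror=lambda err: None):
        for name in filenames:
            path = Path(dirpath, name)
            if path.suffix.lower() == ".gehad":
                encrypted.append(path)
            elif name.lower() == "_readme.txt":
                notes.append(path)

    print(f"{len(encrypted)} encrypted files and {len(notes)} ransom notes found")
    for item in encrypted[:20]:
        # The original name is the path with the appended .gehad extension removed.
        print(item, "->", item.with_suffix(""))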
In the present world, we are surrounded by distinct technology devices and/or systems, mostly intended to reduce human effort in completing various tasks. To fulfill our activities within a reasonable period, we expect reliable communication systems to exchange significant data securely. Vehicular cloud computing (VCC) is a system to manage vehicle-related data for various computations, and this data helps different vehicle operators directly or indirectly. However, a receiver should confirm the correctness of the obtained information, since otherwise it can have erroneous effects. Recently, Zhong et al. suggested a privacy-preserving authentication model, but in this paper we identify that this scheme cannot withstand some attacks, i.e., impersonation, modification, plain-text, and man-in-the-middle. Thus, we propose an improved message confirmation system for the VCC to protect against various security attacks, e.g., replay, plain-text, impersonation, man-in-the-middle, and modification. Further, we carry out performance and security analyses of the suggested method. Next, we compare the proposed system with different message verification protocols, and the results show that the suggested method is more secure and effective compared to other related communication protocols. © 2018 Elsevier B.V.
In this chapter, we are going to look at Apache OpenWhisk. While not strictly a Kubernetes-only project, like, say, Kubeless and Fission (which are covered in the next chapter), it can be deployed on, and take advantage of, Kubernetes. We are going to be looking at three main topics: - An overview of Apache OpenWhisk - Running Apache OpenWhisk locally using Vagrant - Running Apache OpenWhisk on Kubernetes Let's start by finding out more about OpenWhisk.
In these times of complex, advanced and persistent attacks threatening all Internet connected organizations, detecting malware and the associated activities of it is increasingly important. Defending enterprises against these kinds of adversaries is not easy. It requires a combination of people, processes and technology that entails investments of time and resources that many organizations are failing to achieve. As a result, enterprises are being compromised with impunity, and considerable damage is being inflicted in the aftermath of these attacks. This paper explores: - The details of the malware centric capabilities provided in the Fidelis advanced threat defense solution; - The mechanisms used to find and extract objects of interest, in whatever way they are transported across a network; - The components that combine to provide this capability, along with their functionality, benefits, and competitive advantages over other technologies.
As with most areas in software engineering, debugging is a crucial aspect of Android development. Properly setting up your application for debugging can save you hours of work and frustration. Unfortunately, in my experience not many beginners learn how to properly make use of the utility classes provided in the Android SDK. Unless you are an experienced developer, it is my personal belief that Android debugging should follow a pattern. This will prove beneficial for a couple of reasons: It allows you to anticipate bugs down the line. Setting up your development workspace for debugging will give you a head start on bugs you might encounter in the future. It gives you centralized control over the debugging process. Disorganized and sparse placement of log messages in your class can clutter your logcat output, making it difficult to interpret debugging results. The ability to toggle certain groups of log messages on/off can make your life a whole lot easier, especially if your application is complex. The Log Class For those of you who don't know, the Android SDK includes a useful logging utility class called android.util.Log. The class allows you to log messages categorized by severity; each type of log message has its own method. Here is a listing of the message types, and their respective method calls, ordered from lowest to highest priority:
- The Log.v() method is used to log verbose messages.
- The Log.d() method is used to log debug messages.
- The Log.i() method is used to log informational messages.
- The Log.w() method is used to log warnings.
- The Log.e() method is used to log errors.
- The Log.wtf() method is used to log events that should never happen ("wtf" being an abbreviation for "What a Terrible Failure", of course). You can think of this method as the equivalent of Java's assert method.
New Ransomware for Android Infects Its Victims Using SMS Spam By Chaitra V M

Hackers use different ways to breach into a smartphone, using viruses, malware, worms, Trojan horses, phishing, etc., to gain access to personal information. Android devices are targeted by a new ransomware family that spreads to other victims by sending text messages containing malicious links to the entire contact list. This ransomware differs from the rest: unlike past ransomware, it uses text messages to spread to other Android devices. A malicious link is sent by text message to all the contacts on the infected smartphone. Devices running Android 5.1 Lollipop or above are mainly targeted by this malware. The security researchers who discovered the ransomware have classified it as Android/Filecoder.C (FileCoder).

After the malicious link is sent by SMS, the ransomware encrypts most user files on the device and requests a ransom. Because of the flawed encryption, the affected files can be decrypted without any assistance from the attacker. The malware does not encrypt files that have the 'rar' or 'zip' extension.

The malware is distributed via various online forums and has been active since July 12, 2019. A few days after its discovery, researchers extracted samples of the malware from several posts shared on the XDA Developers and Reddit forums. FileCoder's developers used two servers to distribute the ransomware, with malicious payloads linked both in the text messages sent to the victims' entire contact lists and in the forum posts. The samples of the ransomware are linked with the help of QR codes, which makes it faster for mobile users to get the malicious APKs onto their devices and install them. The malicious app is promoted in the forum posts as a free online sex simulator game, which should also lower potential targets' guard enough to get them to download and install the ransomware-ridden app on their devices.

Android/Filecoder.C uses the victim's contact list and spreads further via SMS with malicious links. The ransomware has 42 versions of the message template to maximize its reach. FileCoder spreads itself via SMS to the victim's contact list before starting to encrypt files in all the folders on the device's storage that it can access; the .seven extension is appended to the original file names, and system files are skipped. Android/Filecoder.C uses symmetric and asymmetric algorithms to encrypt files, generating a new AES key while encrypting files. The ransomware also leaves files unencrypted if they have the ".rar" or ".zip" extension and are larger than 50 MB, or if they are ".jpg", ".jpeg" or ".png" files smaller than 150 KB.

The FileCoder ransomware asks its victims for a ransom in Bitcoin. The ransom amount ranges from $94 to $188, and a warning is given to pay within 72 hours (three days) or lose access to the data.
The Security Edition of TheFlex has been developed for all scenarios and areas of application with increased security requirements. All functionalities of the Industry Edition are included here, in addition there are further features in two security-relevant areas: - Security from external threats - Data security Examples of application scenarios include the following: - Processing and display of sensitive and private personal data - Apps with medical information or patient data - Apps in the field of defense or security Security from external threats This point refers to external actors or malware. For example, the Security Edition can detect if the device has been compromised or if the network connection is insecure. This makes it possible either to start TheFlex directly or to prevent the website or web app from loading so that no access to internal systems or data is possible. In some applications, very sensitive data is used, some of which must be protected for legal reasons. The Security Edition of TheFlex can be used to prevent this data from being intentionally or unintentionally extracted from the user's websites or web apps. For example, screenshots of the data can be prevented or texts can be copied out. The Security Edition should be used consciously. Some security functions restrict the user and make work more difficult. It is important to weigh up whether these limitations are worth the extra security. We would be happy to support you in this decision.
12. Using the Basic Directives Directives are special attributes that apply Vue.js functionality to HTML elements in a component’s template. In this chapter, I explain how to use the basic built-in directives that Vue.js provides, which provide some of the most commonly required features in web application development. In Chapters 13 , I describe more complex directives, and in Chapter 26 , I explain how to create custom directives when the built-in ones don’t provide the features you require. Table 12-1 puts the built-in directives in context. Putting the Built-in Directives in Context What are they? The built-in ...
No doubt, the technology is secure. But without assessing the situation holistically, this is inconclusive. If rulesets are wrongly set or the firewall is wrongly configured, then the DPI firewall is insecure. If the connecting components are in a restricted and locked-down environment, a DPI firewall is overkill and won't contribute much additional security. By the same token, the media always exaggerate cyber threats. We must judge whether such threat scenarios are likely in our environment rather than blindly performing unnecessary lockdowns on existing systems. An example is the ransomware attack via an inactive user account through VPN without 2-factor authentication, or via authenticated users exploiting PrintNightmare. Something must be done, but it does not have to be completed today. Security enhancements must be assessed and managed, rather than handled in a piecemeal manner. The latter might even create more problems after blindly applying counter-measures. Remember: action without a plan is a nightmare; a plan without action is a daydream.
Kubernetes RBAC is an efficient role-based authorization method used to provide granular access to resources in a Kubernetes cluster. However, if it is not used properly, it can easily cause a compliance catastrophe. That's why we need RBAC tools to audit and locate risky permissions in Kubernetes. In this article, we will discuss what Kubernetes RBAC is and why it's important to audit risky permissions, and discover the tools that can best help us audit risky permissions!

What is RBAC?

Role-Based Access Control (RBAC) is a security mechanism in which each access authorization is based on roles that are assigned to a user. With this system, it is therefore possible to restrict access to the resources of a Kubernetes cluster (namespaces, pods, jobs) to applications or users. In Kubernetes, RBAC policies can be used to manage the access rights of a system user (User or Group) as well as those of service accounts (Service Account). There are other ways to authorize users in Kubernetes, such as ABAC (Attribute-Based Access Control), Webhook or Node Authorization, but the most widely used and native authorization mechanism available in the stable version is RBAC.

Practically all interaction with the resources is done through the API server, which means that, in the end, everything comes down to making HTTP requests to that server (an essential component of the master node(s), or control plane). Kubernetes has four RBAC-related objects that can be combined to set cluster resource access permissions. These are Role, ClusterRole, RoleBinding, and ClusterRoleBinding. To work with these objects, like all Kubernetes objects, the Kubernetes API must be used.

Roles in Kubernetes

In Kubernetes, there are two types of roles, called Role and ClusterRole. The biggest difference between the two is that a Role belongs to a concrete namespace, while a ClusterRole is global to the cluster. So, in the case of ClusterRole, its name must be unique since it belongs to the cluster. In the case of a Role, two different namespaces can have a Role with the same name. Another difference worth mentioning is that a Role grants access to resources within a single namespace, while a ClusterRole, in addition to being able to grant access to resources in any namespace, can also grant access to cluster-scoped resources such as nodes, among others.

Now that we know the types of roles, the next thing is to know to whom we can assign these roles. In this case, we have User Accounts, Service Accounts, and Groups. User accounts are accounts assigned to a particular user, while service accounts are used by processes. For example, imagine that our application needs to programmatically access resources from the cluster; for this we would use a service account.

Finally, we need the "glue" that binds a role to an account (user or service) or group. There are two resources in Kubernetes for this: RoleBinding and ClusterRoleBinding. A RoleBinding can reference a Role in the same namespace, while a ClusterRoleBinding references a ClusterRole and assigns its permissions globally across the cluster. As a note, permissions only grant access to resources ("by default, everything is denied"), and it is possible to assign several roles to the same user.

The only prerequisite for using RBAC is that it is enabled on our cluster using the "--authorization-mode=RBAC" option on the API server. We can check this with a command such as kubectl api-versions | grep rbac.authorization.k8s.io; if the RBAC API group appears in the output, RBAC is enabled.

What are risky RBAC permissions and how to fix them?
Any permission that allows or could allow unauthorized access to pod resources is considered a risky permission. For example, if a user has edit permission, they can edit their own Role and gain access to resources that they are not otherwise allowed to access. This can result in a compliance issue. Similarly, if old permissions are left unchecked, some users may keep access to resources they no longer need. It is difficult and time-consuming to manually find such risky permissions when you have a large number of Roles. To make this process easier, there are a number of RBAC permission audit tools that help scan your whole cluster to locate any risky permissions. It is also important to understand that the effectiveness of RBAC depends on an up-to-date RBAC policy, which in turn requires regular permission auditing.

The following are some of the best RBAC tools to audit permissions, based on different languages and user interfaces.

KubiScan is a Python-based RBAC tool for scanning risky permissions in a Kubernetes cluster. The tool has to be executed on the master node, and it can then be run directly from the terminal to give a list of risky permissions. KubiScan can be used to find risky Roles, ClusterRoles, RoleBindings, ClusterRoleBindings, Subjects, Pods, and even Containers.

Krane by Appvia is a Ruby-based Kubernetes RBAC static analysis and visualization tool. It can be run either locally as a CLI or in a CI/CD pipeline. Moreover, it can also work as a standalone service within a Kubernetes container. Krane lets you analyze RBAC permissions through faceted tree and graph network views. It also raises alerts for any risky permissions through its Slack integration.

RBAC Tool by InsightCloudSec is a standalone permission auditing tool built with Go. It not only lets you scan for and highlight risky RBAC permissions, it also lets you generate RBAC policies from a permissions audit through its Auditgen feature. RBAC Tool also offers RBAC visualization.

Fairwinds Insights is a standalone tool that provides a number of Kubernetes security and compliance features. Its policy enforcement feature allows you to audit RBAC permissions and scan them against standard and customized policies. Fairwinds offers an on-demand demo.

Kubernetes RBAC is an efficient way to manage access to resources in a Kubernetes cluster. However, if not implemented properly, it can lead to security and compliance issues. These issues can be avoided by continuously auditing permissions with RBAC auditing tools. You may also be interested in Kubernetes Best Practices.
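As a minimal illustration of the kind of check these tools automate, the sketch below uses the official Kubernetes Python client to list ClusterRoleBindings and flag subjects bound to a few high-privilege ClusterRoles. It assumes a kubeconfig with read access to RBAC objects, and the set of "risky" role names is just an example, not an exhaustive policy.

    from kubernetes import client, config

    config.load_kube_config()          # or config.load_incluster_config() when running in a pod
    rbac = client.RbacAuthorizationV1Api()

    RISKY_CLUSTER_ROLES = {"cluster-admin", "admin", "edit"}   # example high-privilege roles

    for binding in rbac.list_cluster_role_binding().items:
        if binding.role_ref.name not in RISKY_CLUSTER_ROLES:
            continue
        for subject in binding.subjects or []:
            print(f"{binding.metadata.name}: {subject.kind} {subject.name} "
                  f"-> ClusterRole {binding.role_ref.name}")

A real audit would also walk namespaced RoleBindings and inspect the verbs and resources inside each role, which is exactly what tools like KubiScan and Krane do for you.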
Using honeynets to learn more about Bots

Ever wonder why your computer is running so slowly? Maybe it's been taken over by bots. Bots are the malicious bits of viral code that infect vulnerable machines and allow them to be controlled remotely. Once compromised, your desktop computer can be used to send spam, host websites and other files without your knowledge, attack and bring down other organizations' sites, and make more bots. The Honeynet Project uses networks of deliberately vulnerable machines to attract bots (sometimes it only takes a few seconds) and trap them, observing their behavior and communications with selective firewall filters. Know Your Enemy is the result of four years' research into the behavior of bots and the people who write and control them. It includes a taxonomy of common bots and botnets, as well as stories about their uses and interactions. Networks of bots can be hired to perform denial-of-service attacks, sending thousands of requests to a web server at once in order to crash it; botnets can be taken over and assimilated by rival botnets; and communications between bots in a network and their controllers take place via Internet Relay Chat and can be monitored and exploited. Know Your Enemy is a fascinating and readable look into this virulent and growing ecosystem. See also: Cory Doctorow's All Complex Ecosystems Have Parasites. He suggests that this kind of unpredictable and sometimes potentially dangerous growth is central to the way that software systems expand their functionality.
Artificial intelligence is a complex system of algorithms running on neural networks that allows a machine to reproduce typically human processes, like reasoning and learning, in order to make decisions in terms of planning or creativity. Applying artificial intelligence to cyber security means, first of all, adopting an automatic system to update the blacklists used by DNS filtering. With over 1.7 billion active websites on the web and, on average, over 50 thousand new domains added every day, a categorization that makes use of advanced systems is fundamental in order to ensure a high level of protection. FlashStart's artificial intelligence can examine up to 200 thousand websites every day, supports 24 languages, and recognizes 90 categories of websites based upon their content. Moreover, innovative machine learning techniques allow the platform to learn from previous experiences, from similar behaviors, from warnings on official websites, and from potential human corrections. It is a true data-gathering process that goes beyond the identification of categories by content and formulates predictions with increasing accuracy. Currently, FlashStart's artificial intelligence boasts a 90 percent rate of correctly predicted domains, excluding only sites with insignificant content or unusual languages, which are therefore hardly dangerous. With more than 190 million sites already surveyed, to which new analyses are added daily as a result of visits from users all over the world, FlashStart offers innovative and effective cyber threat intelligence (malware, ransomware, botnets, phishing, etc.) and easily allows for the customization of the content accessible to end users, based on business, educational, and family needs.
The keystone to good security hygiene is limiting your attack surface. Attack surface reduction is a technique to remove or constrain exploitable behaviors in your systems. In this blog, we discuss the two attack surface reduction rules introduced in the most recent release of Windows and cover suggested deployment methods and best practices.

Software applications may use known, insecure methods, or methods later identified as useful for malware exploits. For example, macros are an old and powerful tool for task automation. However, macros can spawn child processes, invoke the Windows API, and perform other tasks which render them exploitable by malware. Windows Defender Advanced Threat Protection (Windows Defender ATP) enables you to take advantage of attack surface reduction rules that allow you to control exploitable threat vectors in a simple and customizable manner. In previous releases of Windows we launched rules that let customers disallow remote process creation through WMI or PSExec and block Office applications from creating executable content. Other rules include the ability to disable scripts from creating executable content or blocking file executions unless age and prevalence criteria are met.

The latest attack surface reduction rules in Windows Defender ATP are based on system and application vulnerabilities uncovered by Microsoft and other security companies. Below we describe what these rules do. More importantly, we outline recommendations for deploying these rules in enterprise environments.

Block Office communication apps from creating child processes

The Block Office Communication Applications from Creating Child Processes rule protects against attacks that attempt to abuse the Outlook email client. For example, in late 2017 Sensepost demonstrated the DDEAUTO attack, which was later discovered to be applicable to Outlook as well. In this case, the attack surface reduction rule disables the creation of another process from Outlook: DDE still works and data can be exchanged by two running applications, but new processes cannot be created. It is important to note that DDE and DDEAUTO are legacy inter-process communication features available since 1987. Many line-of-business applications rely on this capability. If, for example, DDE is not used in your organization, or if you want to restrict the capability of DDE to already running processes, this can be configured by using the AllowDDE registry key for Office.

While rare, if your organization's applications do create child processes from within Office communication applications, this attack surface reduction rule still provides protection by allowing legitimate processes through exclusions. By limiting the child processes that can be launched by Outlook to only processes with well-defined functionality, this attack surface reduction rule prevents a potential exploit or a social engineering threat from further infecting or compromising the system.

Block Adobe Reader from creating child processes

The second rule we've introduced, Block Adobe Reader from Creating Child Processes, limits the ability of a threat in a malicious PDF file to launch additional payloads, either embedded in the PDF file or downloaded by the threat, irrespective of how the malicious code in the PDF gained code execution, whether by social engineering or by exploiting an unknown vulnerability.
While there may be legitimate business reasons for a business PDF file to create a child process through scripting, this is a behavior that should be discouraged as it is prone to misuse. Our data indicates few legitimate applications utilize this technique. The Block Adobe Reader from Creating Child Processes rule disables child process creation in PDF content except for those files excluded by the IT administrator.

Recommendations on exclusions and deployment

Attack surface reduction rules close frequently used and exploitable behaviors in the operating system and in apps. However, legitimate line-of-business and commercial applications have been written utilizing these same behaviors. To enable non-malicious applications critical to your business, exclusions can be used if they are flagged as violating an attack surface reduction rule. Core Microsoft components, such as operating system files or Office applications, reside in a global exclusion list maintained as part of Defender. These do not need exclusions. Exclusions, when applied, are honored by other Windows Defender ATP exploit mitigation features, including Controlled folder access and Network protection, in addition to attack surface reduction rules. This simplifies exclusion management and standardizes application behavior.

Attack surface reduction rules have three settings: off, audit, and block. Our recommended practice for deploying attack surface reduction rules is to first implement a rule in audit mode. Audit mode will identify uses of exploitable behavior but will not block the behavior. With audit, if you have a line-of-business application utilizing a behavior that is exploitable, the invoking application can be identified and an exclusion added. Rules can be enabled in audit mode with Group Policy, SCCM, or PowerShell. You can review the audited events with Advanced hunting and Alert investigation in Windows Defender Security Center, by creating a custom view in Windows Event Viewer, or by using automated log aggregation tools such as a SIEM. When audit telemetry reveals that line-of-business applications are no longer being impacted by the attack surface reduction rule, the rule setting can be switched to block. This will protect against malware exploitation of the behavior.

For larger enterprises, Microsoft recommends deploying attack surface reduction rules in rings. Rings are groups of machines radiating outward like non-overlapping tree rings. When the inner ring is successfully deployed with the required exclusions, the next ring can be deployed. One of the ways you can create a ring process is by creating specific groups of users or devices in Intune or with a Group Policy management tool.

Monitor attack surface reduction event telemetry

Once a rule is deployed in block mode, it is important to monitor the corresponding event telemetry. This data contains important information. For example, an application update may now require an exclusion, or multiple alerts from a user clicking on executable email attachments may indicate that additional training is required. Attack surface reduction rule events may come from a single, random malware breach, or a sudden, large increase in related block events may indicate that your organization is the target of a new, persistent attack attempting to use a vector covered by attack surface reduction rules.
Where to get more information and support

If you haven't deployed any attack surface reduction rules, take a look at our documentation and discover how you can better protect your enterprise. Minimizing your attack surface can yield large paybacks in decreased threat vulnerability and in allowing the security operations team to focus on other threat vectors. As with all security features, enable attack surface reduction rules in a methodical, controlled manner that allows legitimate business applications to be excluded from analysis.

Peter Thayer and Iaan D'Souza-Wiltshire (@IaanMSFT), Windows Defender ATP
You can't prevent the user from modifying the file. It's their computer, so they can do whatever they want (that's why the whole DRM issue is… difficult). Since you said you're using the file to save a high score, you have a couple of alternatives. Do note that, as previously said, no method will stop a really determined attacker from tampering with the value: since your application is running on the user's computer, they can simply decompile it, look at how you're protecting the value (gaining access to any secret used in the process) and act accordingly. But if someone is willing to decompile an application, find out the protection scheme used, and come up with a script/patch to get around it only to change a number only they can see, well, go for it?

Obfuscate the content
This will prevent the user from editing the file directly, but it won't stop them as soon as the obfuscation algorithm is known.

var plaintext = Encoding.UTF8.GetBytes("Hello, world.");
var encodedtext = Convert.ToBase64String(plaintext);

Save the encoded text to the file, and reverse the process when reading the file.

Sign the content
This will not prevent the user from editing the file or seeing its content (but you don't care, a high score is not secret), but you'll be able to detect whether the user tampered with it.

var key = Encoding.UTF8.GetBytes("My secret key");

using (var algorithm = new HMACSHA512(key))
{
    var payload = Encoding.UTF8.GetBytes("Hello, world.");
    var binaryHash = algorithm.ComputeHash(payload);
    var stringHash = Convert.ToBase64String(binaryHash);
}

Save both the payload and the hash in the file, then when reading the file check if the saved hash matches a newly computed one. Your key must be kept secret.

Encrypt the content
Leverage .NET's cryptographic libraries to encrypt the content before saving it and decrypt it when reading the file. Please take the following example with a grain of salt and spend due time to understand what everything does before implementing it (yes, you'll be using it for a trivial reason, but future you, or someone else, may not). Pay special attention to how you generate the IV and the key.

// The initialization vector MUST be changed every time a plaintext is encrypted.
// The initialization vector MUST NOT be reused a second time.
// The initialization vector CAN be saved along the ciphertext.
// See https://en.wikipedia.org/wiki/Initialization_vector for more information.
var iv = Convert.FromBase64String("9iAwvNddQvAAfLSJb+JG1A==");

// The encryption key CAN be the same for every encryption.
// The encryption key MUST NOT be saved along the ciphertext.
var key = Convert.FromBase64String("UN8/gxM+6fGD7CdAGLhgnrF0S35qQ88p+Sr9k1tzKpM=");

byte[] ciphertext;

using (var algorithm = new AesManaged())
{
    algorithm.IV = iv;
    algorithm.Key = key;

    using (var memoryStream = new MemoryStream())
    {
        using (var encryptor = algorithm.CreateEncryptor())
        using (var cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write))
        using (var streamWriter = new StreamWriter(cryptoStream))
        {
            streamWriter.Write("MySuperSecretHighScore");
        }

        ciphertext = memoryStream.ToArray();
    }

    // Now you can serialize the ciphertext however you like.
    // Do remember to tag along the initialization vector,
    // otherwise you'll never be able to decrypt it.
    // In a real world implementation you should set algorithm.IV,
    // algorithm.Key and ciphertext before decrypting; since this is an example we're
    // re-using the existing variables.
    using (var memoryStream = new MemoryStream(ciphertext))
    using (var decryptor = algorithm.CreateDecryptor())
    using (var cryptoStream = new CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read))
    using (var streamReader = new StreamReader(cryptoStream))
    {
        // You have your "MySuperSecretHighScore" back.
        var plaintext = streamReader.ReadToEnd();
    }
}
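For a high score, the signing approach is usually the sweet spot: the value stays visible, but tampering is detectable. The following is a minimal sketch (not part of the original answer) showing how the HMAC idea above could be wrapped into save/load helpers; the HighScoreFile class name, the file layout, and the key literal are illustrative placeholders only.

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class HighScoreFile
{
    // Placeholder key; however you embed or derive it, a determined user can still extract it.
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("My secret key");

    public static void Save(string path, int score)
    {
        var payload = Encoding.UTF8.GetBytes(score.ToString());
        using (var hmac = new HMACSHA512(Key))
        {
            var hash = Convert.ToBase64String(hmac.ComputeHash(payload));
            // Line 1: base64 payload, line 2: base64 HMAC of the payload.
            File.WriteAllLines(path, new[] { Convert.ToBase64String(payload), hash });
        }
    }

    public static bool TryLoad(string path, out int score)
    {
        score = 0;
        if (!File.Exists(path)) return false;

        var lines = File.ReadAllLines(path);
        if (lines.Length < 2) return false;

        var payload = Convert.FromBase64String(lines[0]);
        using (var hmac = new HMACSHA512(Key))
        {
            var expected = Convert.ToBase64String(hmac.ComputeHash(payload));
            // If the stored hash doesn't match the recomputed one, the file was tampered with.
            if (!string.Equals(expected, lines[1], StringComparison.Ordinal)) return false;
        }
        return int.TryParse(Encoding.UTF8.GetString(payload), out score);
    }
}

A tampered file then simply fails TryLoad, and the application can fall back to a default score instead of trusting the modified value.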
Network forensics is the process of capturing, storing and analyzing activity that takes place on a computer network. While it’s often associated with solving network security breaches, the practice can also help solve far more common network issues, like spikes in utilization, drops in VoIP call quality, identifying rogue activity, and improving both network and application performance. In this slideshow, WildPackets, provider of network and application performance analysis solutions, explains the basics of network forensics and how it can be used to improve network performance at all organizations.
"If code has not been reviewed for security holes, the likelihood that the application has problems is virtually 100%." This is a shrewd message on the first pages of the OWASP Code Review Guide. An organization that does not put the code it uses and develops under review is irresponsible with its assets and those of its customers or users. Security problems in its products can be exploited by cybercriminals, leading to data breaches or disruption of operations and consequent fines and loss of clients and reputation. To help prevent all of this, it's prudent to match software development from the outset with a secure code review. What is secure code review? Secure code review is the examination of an application's source code to identify security flaws or vulnerabilities. These appear in the software development lifecycle (SDLC) and must be closed or fixed to strengthen the security of the code. Secure code review can take place at any point in the SDLC, but within the DevSecOps culture, it is most valuable to use it from the early stages. This is a procedure that can be performed either manually or automatically. The manual secure code review is conducted with great attention to detail. One or more security analysts scrutinize each line of code, understanding what they are evaluating, keeping in mind its use and context, the developer's intentions, and the business logic. On the other hand, the automated secure source code review is a process in which more code is examined in less time but in which the above factors are not considered. The tools work with a predefined set of rules, are restricted to certain types of vulnerabilities and suffer, some more than others, from the defect of reporting false positives (saying that something is a vulnerability when it is not). Among the most commonly used methods in secure code review are Static Application Security Testing (SAST) and Software Composition Analysis (SCA). (Understand in this previous blog post how one differs from the other.) The best option for achieving a robust and secure code review is to take manual and automated reviews and merge them to leverage their particular capabilities. Automated secure code review tools, with their quick and "superficial" assessment of the attack surface, make it easier for security analysts to focus on identifying more complex and business-critical vulnerabilities. Experts, especially ethical hackers, from the threat actors' perspective, can review code to recognize the security issues that contribute most to the risk exposure of the target of evaluation. (For example, in our latest annual State of Attacks report, we shared that 67.4% of the total risk exposure in the assessed systems was reported by the manual method). This makes it possible for vulnerability remediation, an action that should always be connected to the code review, to follow a prioritization. Ultimately, the idea is to reduce the number of flaws that go into production as much as possible but continually repair the most dangerous first. A successful development team, committed to the security of its products, always has secure code review as a pillar. Any organization that develops software should have it among its constant practices, from the early stages of the SDLC, paying attention to the small changes that the members of its team gradually make to the code. Security in general and common weaknesses in software and their exploitation are not usually taught to developers in their academies and workplaces. 
And even the most experienced developers, due to factors such as burnout or carelessness, can make coding mistakes and end up generating vulnerabilities such as those listed in the OWASP Top 10 and CWE Top 25. For reasons such as these, source code should usually remain under review by security experts. Secure code review identifies the absence of safe coding practices, lack of appropriate security controls, and violation of compliance standards such as PCI DSS and HIPAA. Secure code review providers may find, for instance, missing or erroneous validation of inputs (verification that they comply with specific characteristics) coming from different sources that interact with the application (e.g., users, files, data feeds). They may discover that a developer made the mistake of leaving confidential information (e.g., tokens, credentials) inside the code, having forgotten to remove it after putting it there without reasonable justification. They may see that information that does need to be stored and transferred doesn't pass through proper encryption algorithms. Likewise, they may find that user authentication processes are pretty weak, requiring, for example, short passwords with little variety in their characters. And that authorization controls are poor and end up giving unnecessary access to any user without requesting permission. An important issue often discovered within secure code review (with the help of, for example, the SCA method) is vulnerabilities within third-party and open-source software components. Application development today heavily depends on such components, which are imported from various sources and serve as support for what is intended to be built, which often turns out to have little originality. The dependency also exists between some components with others. So when using one of them, the developer may not be aware of the relation of this one with the others. Cybercriminals have among their desired targets these dependencies and components to look for vulnerabilities to exploit. This is such a frequent problem that, in fact, as we reported in State of Attacks, the most common security issue among the evaluated systems was "Use of software with known vulnerabilities," and the requirement whose violation amounted to the highest total exposure was "Verify third-party components." For secure coding practices, we recommend you review the OWASP Code Review Guide with your development team. What are the benefits of secure code review? Secure code review is part of a preventive approach, which should be addressed first, rather than a reactive approach. Applying this method as soon as the first lines of code are written makes it possible to identify and remediate vulnerabilities before going into production so as not to patch the application continuously. Staying one step ahead of malicious hackers and blocking in the code any possible entry for improper uses, even simple shenanigans, is undoubtedly a very effective strategy to reduce the likelihood of catastrophes caused by cyberattacks. Secure code review allows the number of errors or vulnerabilities found in the final stages of the SDLC, through procedures such as manual penetration testing, to be lower. Therefore, the time developers have to spend on remediation processes in these stages can also be reduced. Fixing a large number of vulnerabilities shortly before going into production becomes a thorn on the developers' side. 
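To make the input-validation point above concrete, here is a small, hedged illustration (not taken from the article) of the kind of change a secure code review typically asks for; the connection, table, and column names are hypothetical.

using System;
using System.Data.SqlClient;

static class ScoreQueries
{
    // What a review flags: user input concatenated straight into the SQL text,
    //   "SELECT Score FROM HighScores WHERE UserName = '" + userName + "'"
    // A crafted userName can then change the meaning of the query (SQL injection).

    // What the review asks for instead: the input travels as a typed parameter, never as SQL text.
    public static int? GetScore(SqlConnection connection, string userName)
    {
        using (var cmd = new SqlCommand(
            "SELECT Score FROM HighScores WHERE UserName = @userName", connection))
        {
            cmd.Parameters.AddWithValue("@userName", userName);
            var result = cmd.ExecuteScalar();
            return result == null || result == DBNull.Value ? (int?)null : Convert.ToInt32(result);
        }
    }
}

The same review would also flag the hardcoded-secret and weak-encryption issues described above, which are fixed by moving secrets to configuration or a vault and by using vetted cryptographic libraries.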
Always keep in mind that it is easier and less expensive to do code fixes in the development environment than in production. With a continuous secure code review, you are closer to the cause of the problem and can fix it immediately, avoiding any buildup. Thanks to an early secure code review, developers can start to assume a commitment not only to remedy the security issues identified in their products but also to make their results better every day. This can be a chain process. Certain groups of developers, with the help of the security teams and their tests or reviews, can pass on knowledge, inspire others to improve their practices and productivity and make the transition to a mindset in which everyone in the organization is responsible for security. Those security missteps that so often gave rise to vulnerabilities can become less frequent over time. Organizations that decide to implement secure code review in their software development processes recognize the responsibility to comply with established standards in their industries. They seek to offer products and services that guarantee security for their operations, data and other resources, mainly those of their customers or users. In this way, they succeed in generating trust and reflecting commitment and quality. This positively affects their competitiveness and helps them to maintain a strong reputation. Fluid Attacks' Secure Code Review solution While a team of developers can do their own code reviews, such as when a developer asks a teammate to peer review their build to avoid logical or stylistic errors, it is recommended that, in security issues, experts in the field be involved. Review by an external agent can ensure that all flaws are reported while maintaining an unbiased view. we offer our Secure Code Review solution as a comprehensive and accurate review of your software source code, combining manual and automatic procedures based on methods such as SAST you can apply secure code review from the earliest stages of your SDLC in a continuous manner. You can solve your security issues promptly (prioritizing those that represent the highest risk exposure) in favor of your development team's productivity and the security of your products. Do not hesitate to contact us if you want more information about our Secure Code Review and other solutions in our Continuous Hacking service. Click here to try our Continuous Hacking Machine Plan free for 21 days. Recommended blog posts You might be interested in the following related posts. Definition, implementation, importance and alternatives Keep tabs on this proposal from the Biden-Harris Admin Vulnerability scanning and pentesting for a safer web Definitions, classifications and pros and cons Is your security testing covering the right risks? How this process works and what benefits come with it Get an overview of vulnerability assessment Benefits of continuous over point-in-time pentesting
The HTTP DNT header is a request header that allows users to choose whether their activity may be tracked by each server and web application they communicate with via HTTP. The header field is a mechanism that allows the user to opt in to or out of tracking. Tracking allows users to experience personalized content on the web; the option to opt out of tracking was created in response to growing privacy demands among users. The tracking preference is only expressed if the user has enabled it: a user agent must not send a tracking preference expression if the user has not enabled a tracking preference.
The following field values are generated for the HTTP DNT header field if the tracking preference is enabled:
- 1: This directive indicates that the user prohibits tracking at the target site.
- 0: This directive indicates that the user allows tracking at the target site or has granted an exception for it.
Note: A DNT header field can have zero or more extensions. The extensions are determined by the user agent. A DNT header field can be inserted without a field value if an extension is defined but the tracking preference is not set.
- This is an example from the W3C (World Wide Web Consortium) of a DNT header set to the field value 1:
GET /something/here HTTP/1.1
Host: example.com
DNT: 1
The preference can also be read from JavaScript:
console.log(navigator.doNotTrack);
// prints "1" if DNT is enabled; "0"
// if the user opted in to tracking;
// prints "null" if unspecified
- Safari 7.1.3+, Edge, IE11 and subsequent versions use window.doNotTrack rather than navigator.doNotTrack.
- Prior to Firefox 32, navigator.doNotTrack would report values of yes and no rather than 1 and 0.
Supported Browsers: The browsers that support the HTTP DNT header are listed below:
- Google Chrome
- Internet Explorer
- Microsoft Edge
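Beyond reading the preference in client-side JavaScript, a server can also inspect the DNT request header before enabling any tracking or analytics. The following is a small hedged sketch assuming an ASP.NET Core application; the middleware, the TrackingAllowed item key, and the route are illustrative and not part of the DNT specification.

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Read the DNT request header and record the decision for downstream handlers.
app.Use(async (context, next) =>
{
    // "1" means the user has opted out of tracking; "0" (or no header) means tracking is permitted.
    var dnt = context.Request.Headers["DNT"].ToString();
    context.Items["TrackingAllowed"] = dnt != "1";
    await next();
});

app.MapGet("/", (HttpContext context) =>
    (bool)context.Items["TrackingAllowed"]!
        ? "Tracking enabled for this request."
        : "Tracking disabled: DNT=1 received.");

app.Run();

Downstream code (for example, analytics initialization) can then check the recorded flag instead of re-parsing the header on every call.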
What are HTTP security headers? HTTP security headers are a subset of HTTP headers that is related specifically to security. They are exchanged between a client (usually a web browser) and a server to specify the security details of HTTP communication. There are also other HTTP headers that, although not directly related to privacy and security, can also be considered HTTP security headers. Setting suitable headers in your web applications and web server settings is an easy way to greatly improve the resilience of your web application against many common attacks, including cross-site scripting (XSS) and clickjacking attacks. This post only lists the most important headers – see our white paper on HTTP security headers for a more detailed discussion of available security headers. How HTTP security headers can improve web application security When we talk about web application security on this blog, we often mean finding exploitable vulnerabilities and fixing them in application code. HTTP security headers operate on a different level, providing an extra layer of security by restricting behaviors permitted by the browser and server once the web application is running. Implementing the right headers in the right way is a crucial aspect of any best-practice application setup – but how do you choose the ones that make the biggest difference? As with other web technologies, HTTP protocol headers come and go depending on current protocol specifications and support from browser vendors. Especially in security, where de facto standards can arise and fall out of favor quite independently of official specs, it’s not unusual to find headers that were widely supported a few years ago but are deprecated today. At the same time, completely new proposals can gain universal support in a matter of months. Keeping up with the latest developments is not easy, but leading application security solutions such as Invicti can help by automatically checking for the presence and correctness of HTTP security headers and providing clear recommendations. The most important HTTP security headers First up are the three best-known and probably most important HTTP response headers that any modern web application should be setting to immediately rule out entire classes of web attacks. When enabled on the server, the HTTP Strict Transport Security header (HSTS) enforces the use of encrypted HTTPS connections instead of plain-text HTTP communication. A typical HSTS header might look like this: Strict-Transport-Security: max-age=63072000; includeSubDomains; preload This informs any visiting web browser that the site and all its subdomains use only SSL/TLS communication, and that the browser should default to accessing it over HTTPS for the next two years (the max-age value in seconds). The preload directive indicates that the site is present on a global list of HTTPS-only sites. The purpose of preloading is to speed up page loads and eliminate the risk of man-in-the-middle (MITM) attacks when a site is visited for the first time. Invicti checks if HSTS is enabled and correctly configured. The Content Security Policy header (CSP) is something of a Swiss Army knife among HTTP security headers. It lets you precisely control permitted content sources and many other content parameters and is recommended way to protect your websites and applications against XSS attacks. 
A basic CSP header to allow only assets from the local origin is:
Content-Security-Policy: default-src 'self'
Other directives include script-src, style-src, and img-src to specify permitted sources for scripts, CSS stylesheets, and images. For example, if you specify script-src 'self', you are restricting scripts (but not other content) to the local origin. Among other things, you can also restrict browser plugin sources using plugin-types (unsupported in Firefox) or object-src. Invicti checks if the CSP header is present.
X-Frame-Options
This header was introduced way back in 2008 in Microsoft Internet Explorer to provide protection against cross-site scripting attacks involving HTML iframes. To completely prevent the current page from being loaded into iframes, you can specify:
X-Frame-Options: deny
Other supported values are sameorigin to only allow loading into iframes with the same origin and allow-from to indicate specific permitted URLs. Note that nowadays, this header can usually be replaced by suitable CSP directives. Invicti checks if the X-Frame-Options header is present.
Examples of deprecated HTTP security headers
As already mentioned, some headers get introduced as temporary fixes for specific security issues. As web technology moves on or standards catch up, these become deprecated, often after only a few years. Here are two examples of deprecated headers that were intended to address specific vulnerabilities.
X-XSS-Protection
As the name suggests, this header was intended to guard against cross-site scripting and was typically set as:
X-XSS-Protection: 1; mode=block
Created for browsers equipped with XSS filters, this non-standard header was intended as a way to control the filtering functionality. In practice, it was relatively easy to bypass or abuse. Since modern browsers no longer use XSS filtering, this header is now deprecated. Invicti checks if you have set X-XSS-Protection for your websites.
HTTP Public Key Pinning (HPKP) was introduced in Google Chrome and Firefox to solve the problem of certificate spoofing. HPKP was a complicated mechanism that involved the server presenting clients with cryptographic hashes of valid certificate public keys for future communication. A typical header would be something like:
Public-Key-Pins: pin-sha256="cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs="; max-age=5184000
In practice, public key pinning proved too complicated to use. If configured incorrectly, the header could completely disable website access for the time specified in the max-age parameter (in the example above, this would be two months). The header was deprecated in favor of certificate transparency logs – see the Expect-CT header below.
Other useful HTTP security headers
While not as critical to implement as CSP and HSTS, the additional headers below can also help you harden your web applications with relatively little effort.
The recommended way to prevent website certificate spoofing is to use the Expect-CT header to indicate that only new certificates added to Certificate Transparency logs should be accepted. A typical header would be:
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/report"
The enforce directive instructs clients to refuse connections that violate the Certificate Transparency policy. The optional report-uri directive indicates a location for reporting connection failures. Invicti reports missing Expect-CT headers with a Best Practice severity level.
X-Content-Type-Options
When included in server responses, this header forces web browsers to strictly follow the MIME types specified in Content-Type headers.
This is specifically intended to protect websites from cross-site scripting attacks that abuse MIME sniffing to supply malicious code masquerading as a non-executable MIME type. The header has just one directive:
X-Content-Type-Options: nosniff
Invicti checks if Content-Type headers are set and X-Content-Type-Options: nosniff is present.
Fetch metadata headers
This relatively new set of client-side headers allows the browser to inform the server about application-specific HTTP request attributes. Four headers currently exist:
Sec-Fetch-Site: Specifies the intended relationship between the initiator and target origin
Sec-Fetch-Mode: Specifies the intended request mode
Sec-Fetch-User: Specifies if the request was triggered by the user
Sec-Fetch-Dest: Specifies the intended request destination
When supported by both the server and the browser, these headers provide the server with additional information about intended application behaviors to help identify and block suspicious requests.
Related HTTP headers to improve privacy and security
These final items are not strictly HTTP security headers but can serve to improve both security and privacy.
Referrer-Policy
This controls how much (if any) referrer information the browser should reveal to the web server. Typical usage would be:
Referrer-Policy: origin-when-cross-origin
With this header value, the browser will only reveal its full referrer information (including the URL) for same-origin requests. For all other requests, only information about the origin is sent. Invicti reports missing Referrer-Policy headers with a Best Practice severity level.
Cache-Control
This header allows you to control the caching of specific web pages. Several directives are available, but the typical usage is simply:
Cache-Control: no-store
This prevents any caching of the server response, which can be useful for ensuring that confidential data is not retained in any caches. You can use other available directives to get more precise control over caching behavior.
Clear-Site-Data
If you want to ensure that confidential information from your application is not stored by the browser after a user logs out, you can set the Clear-Site-Data header, for example:
Clear-Site-Data: "*"
The "*" directive will clear all browsing data related to the site. The cache, cookies, and storage directives are also available to give you more fine-grained control over what is cleared.
Feature-Policy
This is an experimental header that allows you to deny access to specific browser features and APIs on the current page. It can be used to control application functionality but also to improve privacy and security. For example, if you want to deny an application permission to access the microphone and camera APIs, you can send the following header:
Feature-Policy: microphone 'none'; camera 'none'
Many more directives are available – see the Feature-Policy documentation on MDN for a full list.
Security headers in action with Sven Morgenroth
Invicti security researcher Sven Morgenroth joined Paul Asadoorian on Paul’s Security Weekly #652 to describe and demonstrate various HTTP headers related to security.
Keep track of your HTTP security headers with Invicti
HTTP security headers can be an easy way to improve web security and often don’t require changes to the application itself, so it’s always a good idea to use the most current headers. However, because browser vendor support for HTTP headers can change so quickly, it’s hard to keep everything up-to-date, especially if you’re working with hundreds of websites. To help you keep up and stay secure, Invicti provides vulnerability checks that include testing for recommended HTTP security headers.
Invicti checks if a header is present and correctly configured, and provides clear recommendations to ensure that your web applications always have the best protection.
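Most of the headers discussed above are set either in the web server configuration or in application code. As a rough illustration (not from the article), here is a hedged sketch of an ASP.NET Core middleware that applies a conservative baseline; the framework choice and the exact values are assumptions and should be tuned to the application.

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Add a baseline set of security headers to every response.
app.Use(async (context, next) =>
{
    var headers = context.Response.Headers;
    headers["Strict-Transport-Security"] = "max-age=63072000; includeSubDomains";
    headers["Content-Security-Policy"] = "default-src 'self'";
    headers["X-Content-Type-Options"] = "nosniff";
    headers["X-Frame-Options"] = "deny";
    headers["Referrer-Policy"] = "origin-when-cross-origin";
    await next();
});

app.MapGet("/", () => "Hello with security headers.");
app.Run();

Setting the headers in one place like this keeps them consistent across pages and makes them straightforward to audit with the kind of scanners described above.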
Analytics Lab Collision Detection Tool
Learn how to leverage telematics data to detect potential collisions using the Collision Detection Add-in in MyGeotab. The Collision Detection feature reports potential collisions detected by telematics data and allows fleet safety managers to monitor, act, and take measures to mitigate risks. An experimental Analytics Lab solution.
Version-1 Lab Release: February 2021. Last document update: March 2021.
Collision Detection is a tool that reports potential collisions that are detected by Geotab’s telematics data within hours. This allows fleet safety managers to monitor, act, and take measures to mitigate risks. The Collision Detection tool is one of the many available tools through the Analytics Lab, found under Dashboard & Analytics in MyGeotab. ✱ NOTE: Users on MyGeotab version 2102 can access Analytics Lab from the ‘Dashboard & Analytics’ menu in the left navigation bar. While collisions are inevitable, incorporating the Collision Detection tool can reduce risk and encourage driver safety. Commercial fleets, in particular, have a lower tolerance for collisions due to higher financial and reputational risks, so being able to review and confirm potential collisions within hours is a useful protective measure. The Collision Detection tool helps fleet managers monitor potential collisions in their fleet vehicles by reviewing critical data points, taking action to reduce exposures, and mitigating risks of future events. ✱ NOTE: This tool is independent of the Collision Reconstruction Add-in, which provides greater details on collisions. The Collision Detection app leverages Geotab’s telematics data and reports all potential collision events in the user’s fleet database that show an unusually high acceleration reading in the horizontal direction. A collision score is generated to indicate how likely the recorded acceleration event is a potential collision. An algorithm runs an analysis over each event, and all events with scores of 50% (previously, 60% based on the older model) or above are classified as a potential Collision, and the rest as No Collision. Due to the granularity of this data, the algorithm can report collision details such as point(s) of impact, magnitude, trigger type (stationary vs. dynamic), location, time, and other metrics and statistics. The algorithm is based on the insights extracted from the data gathered through the recorded events. It supports factors like acceleration and speed patterns, bad installation, event location, road type, as well as behaviour around an incident to estimate a collision score (in %). These detected collisions are then ready to be reviewed by the user to confirm or deny their occurrences, and provide any additional information about the collision events. The app also provides an option to report an undetected collision event manually, which helps the algorithm improve future detections and reporting. ! IMPORTANT: Although all fleet accounts are eligible to use the Collision Detection experiment, some Government and Public Sector fleets may be unable to access it. For more information, Resellers can contact their Partner Account Manager, and Customers can reach out to Support through their standard communication channel. Follow the steps below to navigate through the Collision Detection experiment and review, report, or monitor collisions. ✱ NOTE: For demonstration purposes, this document includes screenshots of the experiment utilizing an internal Geotab database.
Install and launch the Add-in by navigating to Analytics Lab from the MyGeotab navigation bar. ✱ NOTE: You must have Administrator security clearance to use the Collision Detection App. On clicking the Try It icon on the catalog page, the landing page of the app will appear. By default, the App shows all collisions detected in the user’s fleet database. To search for collisions, you can filter using the Show Events dropdown or filter based on specific locations using the Search Location dropdown. Sort the data generated by Vehicle Name, Collision Score, Date, or Impact Points. Select your vehicle to view further details on the collision detected. This provides a summary of: To confirm the occurrence of the event, and to allow users to provide additional information, click Mark As Reviewed, and provide the required information. You can also click Archive to move and save the event. Once the event is marked as reviewed, you can click Edit to change any responses to the event. To report events manually, click Report a collision manually button on the landing page of the app. Provide required details in the Manual collision report, and click Confirm. A list of Collision Detection APIs can be found here. Please note that access to all Collision Detection APIs require users to review the terms and consent to the disclaimer form via the Collision Detection application in Analytics Lab, or the setUserConsent API listed in the document. As we continue to better our Add-ins, your feedback is valued. Contact us with your feedback or questions by clicking Leave Feedback and completing our experiment feedback form. If you have an idea for a new data experiment, click here and let us know! Come join our Analytics Lab Community group to connect with our data experts, learn more about data inside Geotab and our upcoming experiments. Data Product Discovery Team Initial Quick Guide Updates based on the new model
The Developer Console can look overwhelming, but it’s just a collection of tools that help you work with code. In this lesson, you’ll execute Apex code and view the results in the Log Inspector. The Log Inspector is a useful tool you’ll use often.
1. Click Debug > Open Execute Anonymous Window or CTRL+E.
2. In the Enter Apex Code window, enter the following text:
System.debug( 'Hello World' );
Note: System.debug() is like using System.out.println() in Java (or printf() if you’ve been around a while ;-). But, when you’re coding in the cloud, where does the output go? Read on!
3. Deselect Open Log and then click Execute.
Every time you execute code, a log is created and listed in the Logs panel. Double-click a log to open it in the Log Inspector. You can open multiple logs at a time to compare results. Log Inspector is a context-sensitive execution viewer that shows the source of an operation, what triggered the operation, and what occurred afterward. Use this tool to inspect debug logs that include database events, Apex processing, workflow, and validation logic. The Log Inspector includes predefined perspectives for specific uses. Click Debug > Switch Perspective to select a different view, or click CTRL+P to select individual panels. You’ll probably use the Execution Log panel the most. It displays the stream of events that occur when code executes. Even a single statement generates a lot of events. The Log Inspector captures many event types: method entry and exit, database and web service interactions, and resource limits. The event type USER_DEBUG indicates the execution of a System.debug() statement.
1. Click Debug > Open Execute Anonymous Window or CTRL+E and enter the following code:
System.debug( 'Hello World' );
System.debug( System.now() );
System.debug( System.now() + 10 );
2. Select Open Log and click Execute.
3. In the Execution Log panel, select Executable. This limits the display to only those items that represent executed statements. For example, it filters out the cumulative limits.
4. To filter the list to show only USER_DEBUG events, select Debug Only or enter USER in the Filter field.
According to the news report, the CSPP project, which focuses on helping developing countries create and implement climate-smart policies, was ideal for phishing attacks as it used an Extended Validation (EV) SSL certificate issued by Comodo for the World Bank Group. Since the website carried EV and SSL certificate issued for the World Bank Group, it gave the phishing website enough credibility for the visitors to easily fall for it. It is said that the certificate gives the “highest available level of trust” as it is offered after an extensive verification process. After that it displays the name of the owner. Now, the PayPal phishing site tricked the visitor into logging in with their PayPal credentials. Soon after, the data was submitted and stolen, the user was prompted that the site was unable to load the user’s account and required confirmation of their personal information. The site then required the user to share their email address, name, postal address, date of birth, and phone number. Then, it asked the user to verify their PayPal payment information, including credit card number, expiry date, its CVV number, and 3D Secure password if the card required verification. After collecting this personal and payment information, the phishing site then directed the user to the legitimate PayPal website. The phishing page was hosted on climatesmartplanning.org, the fact that the green address bar in the browser displayed “World Bank Group” might have convinced users that the page was legitimate. According to various news reports, the same CSPP website was also targeted by a different type of hacker. Although, the phishing page was removed by the CSPP webmasters, the site’s homepage was defaced by an Iraqi hacker who appears to deface random websites in an effort to boost his reputation among his peers. Today, the site’s EV certificate has been revoked.
Cloud misconfigurations: what can you do to protect your business? In the IT world, in the past, a misconfiguration was viewed as an occurrence that often originated in human error and was sometimes accepted as a price to be paid. Over time, many checks and balances were built into IT processes to prevent, detect, and recover from commonly occurring misconfigurations. Now travel with me to the present day, and the same thing continues with IT in the cloud. This may seem equally benign as earlier; but now, these misconfigurations have implications and impacts far beyond the enterprise’s network and IT infrastructure. In fact, this is well recognized by IT professionals and echoed in a 2021 report from Zimperium which indicated that unsecured cloud configurations exposed information in thousands of apps. The Rapid7 Cloud Misconfigurations Report of 68 different accounts of breaches in 2021 found a whole swathe of industries were affected, including information technology, healthcare, and public information, including the giants of industry and those from the Fortune 500 list. Cloud misconfiguration simply means not configuring cloud systems correctly, leaving them open to all and sundry. Some common examples of such misconfigurations, include but are not limited to: - Granting public access to data stores/buckets - Having poor controls on network functionality - Storing encryption passwords and keys in open repositories The outcome of these misconfigurations can be wide and deep. At the simplest level, your data and the data belonging to your customers can be exposed. This can have huge financial and reputational impacts. Misconfiguration errors also lead to data breaches, allow the deletion or modification of resources, cause service interruptions, and otherwise wreak havoc on business operations. Further, the length of time organizations take to detect a cloud configuration mistake varies widely and makes the situation even more explosive. Respondents in the 2021 Cloud Security Alliance State of Cloud Security Risk, Compliance, and Misconfigurations- survey indicated that most commonly, cloud misconfigurations are found within a day (23%) or within a week (22%). More concerning, however, was that 22% of organizations take longer than one week to even find the configuration errors, let alone resolve the misconfiguration. Now that we have indulged in some good old-fashioned fear in the triad of fear, uncertainty and doubt, or FUD, let’s take a deep breath and look at ten practical things you can do to protect the enterprise from these pesky misconfigurations: - Ensure that the cloud team has the requisite knowledge and skills on general information and cybersecurity and is specifically skilled on cloud security aspects. This can include pursuing credentials such as the Certificate of Cloud Auditing Knowledge and Cloud Fundamentals Certificate from ISACA, or the Certificate of Cloud Security Knowledge from Cloud Security Alliance. - Establish and implement cloud related security and other baselines – this is easy to say and (probably also) do and you can find the necessary support and inspiration from many sources including various vendors themselves. You can also look at the CIS benchmarks for various cloud usages and vendors. - Automate the rollout of polices on the cloud workloads where possible so that the potential for human error is minimized. - Enable automation and continuous scanning for misconfigurations to prevent security incidents. 
Automation enables the remediation of issues in real time so that the vulnerabilities are quickly fixed. - Assess the compliance status on an on-going basis so that deviations and other missteps are identified as close to the point of occurrence as possible and can be remediated before the outcome ends up in the press. - Build an appropriate system of checks and balances. This includes making sure that an appropriate change management process is followed with the requisite review of changes once complete so that compliance to previously established baselines is also reviewed, and gaps fixed. - Even if this may be stating the obvious, avoid a “lift and shift” at all costs because the controls and measures that you apply for instance to a database in an on-premises model are not the same in the cloud. The public nature of the resources, the types and levels of access that may be required in the cloud will undermine all previously established on-premises controls and security measures. - Distribute responsibilities across the DevOps or application engineering teams instead of holding your IT operations and information security teams primarily responsible for detecting, monitoring, and tracking potential misconfigurations. - Aim for alignment among departments regarding security policies and enforcement strategies and try to move toward a DevSecOps approach so that there is improved interdepartmental alignment on security policies and enforcement, which is crucial for proactive security. - Last but not least, do everything you can to combat shadow IT, which is very prevalent when it comes to enterprises consuming SaaS services. Organizations don’t have to accept cloud misconfigurations as inevitable, and by taking some proactive steps, they can avoid or mitigate these misconfigurations and their negative impacts. (The author Mr. R.V. Raghu, Director at Versatilist Consulting India Pvt Ltd, and ISACA Ambassador in India and the views expressed in this article are his own)
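As a closing illustration of the continuous-scanning recommendation above, here is a small hedged sketch that flags publicly accessible Azure Blob Storage containers. It assumes the Azure.Storage.Blobs SDK; the environment variable name and the account contents are placeholders, and equivalent checks exist for other cloud providers.

using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class PublicAccessScan
{
    static async Task Main()
    {
        // Placeholder: in practice the connection string comes from a secured configuration source.
        var connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
        var serviceClient = new BlobServiceClient(connectionString);

        // Walk every container in the account and flag any that allow anonymous access.
        await foreach (BlobContainerItem container in serviceClient.GetBlobContainersAsync())
        {
            var containerClient = serviceClient.GetBlobContainerClient(container.Name);
            var props = (await containerClient.GetPropertiesAsync()).Value;

            if (props.PublicAccess != null && props.PublicAccess != PublicAccessType.None)
            {
                Console.WriteLine($"Misconfiguration: container '{container.Name}' allows public access ({props.PublicAccess}).");
            }
        }
    }
}

Running a check like this on a schedule, and wiring its output into ticketing or alerting, is one simple way to move from point-in-time reviews toward the continuous compliance assessment described above.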
Prerequisite : Security Testing The Interactive Application Security Test (IAST) is a new generation of vulnerability analysis technology which can effectively solve the technical gaps of the various sites represented by the e-commerce platform. This technology combines Static Application Security Testing (SAST) with Dynamic Application Security Testing (DAST) using a unique design context association mechanism. IAST integrates the advantages of SAST and DAST technology, and it continuously detects and identifies weaknesses in applications. Interactive Application Security Testing : Interactive Application Security Testing is a new generation and advanced testing method which is used for identification and management of security risks associated with a running web application. That’s why it is also called as Run time testing and uses a lot of dynamic testing techniques. It keeps eye on the running software and monitors it’s running and gather information of its performance with the help of special software tools. So, in real time it analyzes the software. Benefits of IAST : It generally occurs during the testing/quality assurance phase of the Software Development Life Cycle (SDLC) so problems are detected early in the development cycle, reducing treatment costs and delays. Several tools can be integrated into the Continuous Integration (CI) and Continuous Development (CD) tools. - IAST provides accurate results for a fast sort where the DAST tools often generate many false positives but do not specify lines of code for the vulnerabilities. - IAST Precisely identifies the source of the vulnerabilities by allowing developers to quickly identify and fix the source of the specific vulnerability. - IAST Easily integrates into CI/CD, and it is the only type of dynamic testing technology that integrates seamlessly into CI / CD pipelines. Basic step to operate this effectively : - Deploy DevOps to check and monitor integration into a CI / CD environment. - Choose tools that can perform code reviews of applications written in the programming languages. - Establish the infrastructure for the survey and deploy the tool. - Set up access control and authorization and any required integrations, such as Jira for bug tracking, to deploy the tool. - Customize the tool. Refine the tool to suit the needs of the organization. - Set priorities and add applications. If multiple apps are there, prioritize high-risk web apps to scan first. - Train the development and security teams on effectively using the results from the IAST tool. Here are the main advantages of using IAST : - False positives : IAST provides an interactive test that takes advantage of more data and leads to better and more accurate discoveries. Less false positives. - Covering vulnerabilities : IAST enables to create custom rules and customize a threat coverage strategy according to specific organizations and industries. - Code Coverage : Interactive testing technology can fully scan the application, providing much better coverage. - Scalability : Interactive testing tools can handle any size of application, including large operations. - Instant feedback : Interactive test tools provide instant feedback. What should you look for in the IAST tool : - The web APIs that enable DevOps incorporate testing into designs for Jenkins and other enterprise tools. 
- Jira native integration for bug tracking and incorporation into other development tools, quality assurance and testing
- Compliance with any type of test method – current automation tests, manual quality assurance/development tests, automated web crawlers, unit testing, etc.
- Real-time analysis results at low false positive rates out of the box
- The ability to expand in a large enterprise environment
- Fully automated, Docker-based, or manual post forms
- Support for standardized architecture based on microservices and cloud-based applications
Ransomware is a form of malware that is especially nasty. When you are under a ransomware attack your data is literally held hostage. Most times […] What’s A Honeypot? TO CATCH A HACKER Honeypots are used to trap would be attackers! Why is this is important? Because if you know how attackers get into […] Intrusion Prevention Vs Intrusion Detection Both a NIDS AND NIPS safeguard against intrusions. The major difference is that a NIDS alerts you to the intrusion and a NIPS tries to […]
Person re-identification (Re-ID) is the task of identifying the same person in non-overlapping cameras. This task has attracted extensive research interest due to its significance in surveillance and public security. State-of-the-art Re-ID performance is achieved mainly by fully supervised methods Sun et al. (2018); Chen et al. (2019). These methods need sufficient annotations that are expensive and time-consuming to attain, making them impractical in real-world deployments. Therefore, more and more recent studies focus on unsupervised settings, aiming to learn Re-ID models via unsupervised domain adaptation (UDA) Wei et al. (2018b); Qi et al. (2019b); Zhong et al. (2019) or purely unsupervised Lin et al. (2019); Li et al. (2018); Wu et al. (2019b) techniques. Although considerable progress has been made in the unsupervised Re-ID task, there is still a large gap in performance compared to the supervised counterpart. This work addresses the purely unsupervised Re-ID task, which does not require any labeled data and therefore is more challenging than the UDA-based problem. Previous methods mainly resort to pseudo labels for learning, adopting Clustering Lin et al. (2019); Zeng et al. (2020), k-nearest neighbors (k-NN) Li et al. (2018); Chen et al. (2018), or graph Ye et al. (2017); Wu et al. (2019b) based association techniques to generate pseudo labels. The clustering-based methods learn Re-ID models by iteratively conducting a clustering step and a model updating step. These methods have a relatively simple routine but achieve promising results. Therefore, we follow this research line and propose a more effective approach. treat each cluster as a pseudo identity class, neglecting the intra-ID variance caused by the change of pose, illumination, and camera views. When observing the distribution of features extracted by an ImageNetKrizhevsky et al. (2012)-pretrained model from Market-1501 Zheng et al. (2015), we notice that, among the images belonging to a same ID, those within cameras are prone to gather closer than the ones from different cameras. That is, one ID may present multiple sub-clusters, as demonstrated in Figure 1(b) and (c). The above-mentioned phenomenon inspires us to propose a camera-aware proxy assisted learning method. Specifically, we split each single cluster, which is obtained by a camera-agnostic clustering method, into multiple camera-aware proxies. Each proxy represents the instances coming from the same camera. These camera-aware proxies can better capture local structures within IDs. More important, when treating each proxy as an intra-camera pseudo identity class, the variance and noise within a class are greatly reduced. Taking advantage of the proxy-based labels, we design an intra-camera contrastive learning Chen et al. (2020) component to jointly tackle multiple camera-specific Re-ID tasks. When compared to the global Re-ID task, each camera-specific task deals with less number of IDs and smaller variance while using more reliable pseudo labels, and therefore is easier to learn. The intra-camera learning enables our Re-ID model to effectively learn discrimination ability within cameras. Besides, we also design an inter-camera contrastive learning component, which exploits both positive and hard negative proxies across cameras to learn global discrimination ability. A proxy-balanced sampling strategy is also adopted to select appropriate samples within each mini-batch, facilitating the model learning further. 
In contrast to previous clustering-based methods, the proposed approach distinguishes itself in the following aspects: Instead of using camera-agnostic clusters, we produce camera-aware proxies which can better capture local structure within IDs. They also enable us to deal with large intra-ID variance caused by different cameras, and generate more reliable pseudo labels for learning. With the assistance of the camera-aware proxies, we design both intra- and inter-camera contrastive learning components which effectively learn ID discrimination ability within and across cameras. We also propose a proxy-balanced sampling strategy to facilitate the model learning further. Extensive experiments on three large-scale datasets, including Market-1501 Zheng et al. (2015)et al. (2017), and MSMT17 Wei et al. (2018a), show that the proposed approach outperforms both purely unsupervised and UDA-based methods. Especially, on the challenging MSMT17 dataset, we gain Rank-1 and mAP improvements when compared to the second place. 2 Related Work 2.1 Unsupervised Person Re-ID According to whether using external labeled datasets or not, unsupervised Re-ID methods can be grouped into purely unsupervised or UDA-based categories. Purely unsupervised person Re-ID does not require any annotations and thus is more challenging. Existing methods mainly resort to pseudo labels for learning. Clustering Lin et al. (2019); Zeng et al. (2020), k-NN Li et al. (2018); Chen et al. (2018), or graph Ye et al. (2017); Wu et al. (2019b) based association techniques have been developed to generate pseudo labels. Most clustering-based methods like BUC Lin et al. (2019) and HCT Zeng et al. (2020) perform in a camera-agnostic way, which can maintain the similarity within IDs but may neglect the intra-ID variance caused by the change of camera views. Conversely, TAUDL Li et al. (2018), DAL Chen et al. (2018), and UGA Wu et al. (2019b) divide the Re-ID task into intra- and inter-camera learning stages, by which the discrimination ability learned from intra-camera can facilitate ID association across cameras. These methods generate intra-camera pseudo labels via a sparse sampling strategy, and they need a proper way for inter-camera ID association. In contrast to them, our cross-camera association is straightforward. Moreover, we propose distinct learning strategies in both intra- and inter-camera learning parts. Unsupervised domain adaptation (UDA) based person Re-ID requires some source datasets that are fully annotated, but leaves the target dataset unlabeled. Most existing methods address this task by either transferring image styles Wei et al. (2018b); Deng et al. (2018a); Liu et al. (2019) or reducing distribution discrepancy Qi et al. (2019b); Wu et al. (2019a) across domains. These methods focus more on transferring knowledge from source to target domain, leaving the unlabeled target datasets underexploited. To sufficiently exploit unlabeled data, clustering Fan et al. (2018); Zhai et al. (2020); Ge et al. (2020b) or k-NN Zhong et al. (2019) based methods have also been developed, analogous to those introduced in the purely unsupervised task. Differently, these methods either take into account both original and transferred data Fan et al. (2018); Zhong et al. (2019); Ge et al. (2020b), or integrate a clustering procedure together with an adversarial learning step Zhai et al. (2020). 2.2 Intra-Camera Supervised Person Re-ID Intra-camera supervision (ICS) Zhu et al. (2019); Qi et al. 
(2020) is a new setting proposed in recent years. It assumes that IDs are independently labeled within each camera view and no inter-camera ID association is annotated. Therefore, how to effectively perform the supervised intra-camera learning and the unsupervised inter-camera learning are two key problems. To address these problems, various methods such as PCSL Qi et al. (2020), ACAN Qi et al. (2019a), MTML Zhu et al. (2019), MATE Zhu et al. (2020), and Precise-ICS Wang et al. (2021) have been developed. Most of these methods pay much attention to the association of IDs across cameras. When taking camera-aware proxies as pseudo labels, our work shares a similar scenario in the intra-camera learning with these ICS methods. Differently, our inter-camera association is straightforward due to the proxy generation scheme. We therefore focus more on the way to generate reliable proxies and conduct effective learning. Besides, the unsupervised Re-ID task tackled in our work is more challenging than the ICS problem.
2.3 Metric Learning with Proxies
Metric learning plays an important role in person Re-ID and other fine-grained recognition tasks. An extensively utilized loss for metric learning is the triplet loss Hermans et al. (2017), which considers the distances of an anchor to a positive instance and a negative instance. Proxy-NCA Movshovitz-Attias et al. (2017) proposes to use proxies for the measurement of similarity and dissimilarity. A proxy, which represents a set of instances, can capture more contextual information. Meanwhile, the use of proxies instead of data instances greatly reduces the triplet number. Both advantages help metric learning to gain better performance. Further, with the awareness of intra-class variances, Magnet Rippel et al. (2016), MaPML Qian et al. (2018), SoftTriple Qian et al. (2019), and GEORGE Sohoni et al. (2020) adopt multiple proxies to represent a single cluster, by which local structures are better represented. Our work is inspired by these studies. However, in contrast to setting a fixed number of proxies for each class or designing a complex adaptive strategy, we split a cluster into a varying number of proxies simply according to the involved camera views, making our proxies more suitable for the Re-ID task.
3 A Clustering-based Re-ID Baseline
We first set up a baseline model for the unsupervised Re-ID task. As is common practice in clustering-based methods Fan et al. (2018); Lin et al. (2019); Zeng et al. (2020), our baseline learns a Re-ID model iteratively and, at each iteration, it alternates between a clustering step and a model updating step. In contrast to these existing methods Fan et al. (2018); Lin et al. (2019); Zeng et al. (2020), we adopt a different strategy in the model updating step, making our baseline model more effective. The details are introduced as follows. Given an unlabeled dataset $\mathcal{D} = \{x_i\}_{i=1}^{N}$, where $x_i$ is the $i$-th image and $N$ is the image number, we build our Re-ID model upon a deep neural network $\phi_\theta$ with parameters $\theta$. The parameters are initialized by an ImageNet Krizhevsky et al. (2012)-pretrained model. When image $x_i$ is input, the network performs feature extraction and outputs the feature $f_i = \phi_\theta(x_i)$. Then, at each iteration, we adopt DBSCAN Ester et al. (1996) to cluster the features of all images, and further select reliable clusters by leaving out isolated points. All images within each cluster are assigned the same pseudo identity label. By this means, we get a labeled dataset $\mathcal{D}' = \{(x_i, y_i)\}_{i=1}^{N'}$, in which $y_i$ is a generated pseudo label.
Here, $N'$ is the number of images contained in the selected clusters and $C$ is the cluster number. Once pseudo labels are generated, we adopt a non-parametric classifier Wu et al. (2018) for model updating. It is implemented via an external memory bank and a non-parametric Softmax loss. More specifically, we construct a memory bank $\mathcal{K} \in \mathbb{R}^{d \times C}$, where $d$ is the feature dimension. During back-propagation, when the model parameters are updated by gradient descent, the memory bank is updated by

$\mathcal{K}[y_i] \leftarrow \mu \mathcal{K}[y_i] + (1 - \mu) f_i,$   (1)

where $\mathcal{K}[y_i]$ is the $y_i$-th entry of the memory, storing the updated feature centroid of class $y_i$. Moreover, $x_i$ is an image belonging to class $y_i$ and $\mu$ is an updating rate. Then, the non-parametric Softmax loss is defined by

$\mathcal{L} = -\log \frac{\exp(\mathcal{K}[y_i]^{\top} f_i / \tau)}{\sum_{j=1}^{C} \exp(\mathcal{K}[j]^{\top} f_i / \tau)},$   (2)

where $\tau$ is a temperature factor. This loss achieves classification via pulling an instance close to the centroid of its class while pushing it away from the centroids of all other classes. This non-parametric loss plays a key role in recent contrastive learning techniques Wu et al. (2018); Zhong et al. (2019); Chen et al. (2020); He et al. (2019), demonstrating a powerful ability in unsupervised feature learning.

4 The Camera-aware Proxy Assisted Method

Like previous clustering-based methods Fan et al. (2018); Lin et al. (2019); Zeng et al. (2020); Zhai et al. (2020), the above-mentioned baseline model conducts clustering in a camera-agnostic way. This clustering way may maintain the similarity within each identity class, but neglects the intra-ID variance. Considering that the most severe intra-ID variance is caused by the change of camera views, we split each single class into multiple camera-specific proxies. Each proxy represents the instances coming from the same camera. The obtained camera-aware proxies not only capture the variance within classes, but also enable us to divide the model updating step into intra- and inter-camera learning parts. Such a divide-and-conquer strategy facilitates our model updating. The entire framework is illustrated in Figure 2, in which the modified clustering step and the improved model updating step are alternately iterated.

More specifically, at each iteration, we split the camera-agnostic clustering results into camera-aware proxies, and generate a new set of pseudo labels that are assigned in a per-camera manner. That is, the proxies within each camera view are independently labeled. It also means that two proxies split from the same cluster may be assigned two different labels. We denote the newly labeled dataset of the $c$-th camera by $\mathcal{D}_c$. Here, image $x_i$, which previously is annotated with a global pseudo label $y_i$, is additionally annotated with an intra-camera pseudo label $\tilde{y}_i$ and a camera label $c_i$. $N_c$ and $P_c$ are, respectively, the number of images and proxies in camera $c$, and $M$ is the number of cameras. Then, the entire labeled dataset is $\mathcal{D} = \mathcal{D}_1 \cup \cdots \cup \mathcal{D}_M$. Consequently, we construct a proxy-level memory bank $\mathcal{K} \in \mathbb{R}^{d \times P}$, where $P = \sum_{c=1}^{M} P_c$ is the total number of proxies in all cameras. Each entry of the memory stores a proxy, which is updated by the same strategy as introduced in Eq. (1) but considers only the images belonging to the proxy. Based on the memory bank, we design an intra-camera contrastive learning loss $\mathcal{L}_{intra}$ that jointly learns per-camera non-parametric classifiers to gain discrimination ability within cameras. Meanwhile, we also design an inter-camera contrastive learning loss $\mathcal{L}_{inter}$, which considers both positive and hard negative proxies across cameras to boost the discrimination ability further.
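To make the clustering-and-splitting step concrete, the following is a minimal sketch of how camera-aware proxies and the proxy-level memory could be built from extracted features. It is our own illustration, not the authors' released code; the function name, the DBSCAN parameters (eps, min_samples), and the use of scikit-learn are assumptions made for the example.

```python
# Minimal sketch (assumptions ours): global DBSCAN clustering, outlier removal,
# then splitting every cluster by camera ID into proxies with an L2-normalized
# proxy memory. `dist` is assumed to be a precomputed (e.g. Jaccard) distance matrix.
import numpy as np
from sklearn.cluster import DBSCAN

def build_camera_aware_proxies(feats, cams, dist, eps=0.5, min_samples=4):
    """feats: (N, d) L2-normalized features; cams: (N,) camera IDs; dist: (N, N)."""
    global_labels = DBSCAN(eps=eps, min_samples=min_samples,
                           metric="precomputed").fit_predict(dist)  # -1 marks outliers
    keep = global_labels >= 0                       # leave out isolated points

    proxy_labels = np.full(len(feats), -1, dtype=int)
    proxy_memory, proxy_cam, proxy_cls = [], [], []
    for cls in np.unique(global_labels[keep]):
        for cam in np.unique(cams[(global_labels == cls) & keep]):
            idx = np.where((global_labels == cls) & (cams == cam) & keep)[0]
            pid = len(proxy_memory)                 # new camera-aware proxy ID
            proxy_labels[idx] = pid
            centroid = feats[idx].mean(axis=0)
            proxy_memory.append(centroid / np.linalg.norm(centroid))
            proxy_cam.append(cam)                   # camera that owns this proxy
            proxy_cls.append(cls)                   # global pseudo label it was split from
    return proxy_labels, np.stack(proxy_memory), np.array(proxy_cam), np.array(proxy_cls)
```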
4.1 The Intra-camera Contrastive Learning

With the per-camera pseudo labels, we can learn a classifier for each camera and jointly learn all the classifiers. This strategy has two advantages. First, the pseudo labels generated from the camera-aware proxies are more reliable than the global pseudo labels, which means the model learning suffers less from label noise and gains better intra-camera discrimination ability. Second, the feature extraction network shared in the joint learning is optimized to be discriminative in different cameras concurrently, which implicitly helps the Re-ID model gain cross-camera discrimination ability. Therefore, we learn one non-parametric classifier for each camera and jointly learn the classifiers for all cameras. To this end, we define the intra-camera contrastive learning loss as

$\mathcal{L}_{intra} = -\sum_{c=1}^{M} \frac{1}{N_c} \sum_{x_i \in \mathcal{D}_c} \log \frac{\exp(\mathcal{K}[j]^{\top} f_i / \tau)}{\sum_{k=A_{c-1}+1}^{A_c} \exp(\mathcal{K}[k]^{\top} f_i / \tau)}.$   (3)

Here, given image $x_i$, together with its per-camera pseudo label $\tilde{y}_i$ and camera label $c$, we set $A_c$ to be the total proxy number accumulated from the first to the $c$-th camera (with $A_0 = 0$), and $j = A_{c-1} + \tilde{y}_i$ to be the index of the corresponding entry in the memory. The factor $1/N_c$ is to balance the varying numbers of images in different cameras. This loss performs contrastive learning within cameras. As illustrated in Figure 3(a), it pulls an instance close to the proxy to which it belongs and pushes it away from all other proxies in the same camera.

4.2 The Inter-camera Contrastive Learning

Although the intra-camera learning introduced above provides our model with considerable discrimination ability, the model is still weak at cross-camera discrimination. Therefore, we propose an inter-camera contrastive learning loss, which explicitly exploits correlations across cameras to boost the discrimination ability. Specifically, given image $x_i$, we retrieve all positive proxies from different cameras, which share the same global pseudo label $y_i$. Besides, the $K$-nearest negative proxies in all cameras are taken as the hard negative proxies, which are crucial to deal with the similarity across identity classes. The inter-camera contrastive learning loss aims to pull an image close to all positive proxies while pushing it away from the mined hard negative proxies, as demonstrated in Figure 3(b). To this end, we define the loss as

$\mathcal{L}_{inter} = -\frac{1}{|\mathcal{P}_i|} \sum_{p \in \mathcal{P}_i} \log \frac{\exp(\mathcal{K}[p]^{\top} f_i / \tau)}{\sum_{k \in \mathcal{P}_i \cup \mathcal{Q}_i} \exp(\mathcal{K}[k]^{\top} f_i / \tau)},$   (4)

where $\mathcal{P}_i$ and $\mathcal{Q}_i$ denote the index sets of the positive and hard negative proxies for $x_i$, respectively, and $|\mathcal{P}_i|$ is the cardinality of $\mathcal{P}_i$.

4.3 A Summary of the Algorithm

The proposed approach iteratively alternates between the camera-aware proxy clustering step and the intra- and inter-camera learning step. The entire loss for model learning is

$\mathcal{L} = \mathcal{L}_{intra} + \lambda \mathcal{L}_{inter},$   (5)

where $\lambda$ is a parameter to balance the two terms. We summarize the whole procedure in Algorithm 1.

A proxy-balanced sampling strategy. A mini-batch in Algorithm 1 involves an update to the Re-ID model using a small set of samples, and it is not trivial to choose appropriate samples in each batch. The traditional random sampling strategy may be overwhelmed by identities having more images than the others. Class-balanced sampling, which randomly chooses a number of classes and an equal number of samples per class as in Hermans et al. (2017), tends to sample an identity more frequently from image-rich cameras, causing ineffective learning for image-deficient cameras. To make sampling more effective, we propose a proxy-balanced sampling strategy: in each mini-batch, we choose a number of proxies and an equal number of samples per proxy. This strategy performs balanced optimization of all camera-aware proxies and enhances the learning of rare proxies, thus promoting the learning efficacy.
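The sketch below illustrates, under our own notation, how Eqs. (3)-(5) and the proxy-balanced sampler could be realized; it is not the authors' implementation. The temperature 0.07, the 50 hard negatives, and the batch sizes are placeholder values, and `memory`, `proxy_cam`, and `proxy_cls` are assumed to be tensors produced by a proxy-generation step such as the earlier sketch.

```python
# Illustrative sketch of the intra-/inter-camera losses and the proxy-balanced
# sampler; placeholder hyperparameters, not the paper's settings.
import torch
import random
from collections import defaultdict

def intra_camera_loss(feat, proxy_id, cam_id, memory, proxy_cam, tau=0.07):
    """Pull `feat` toward its own proxy, push it from the other proxies of the
    same camera (Eq. (3), shown here without the per-camera balancing weight)."""
    sims = memory @ feat / tau                      # similarity to every proxy
    same_cam = proxy_cam == cam_id                  # candidates: proxies in this camera
    return -(sims[proxy_id] - torch.logsumexp(sims[same_cam], dim=0))

def inter_camera_loss(feat, proxy_id, memory, proxy_cls, k=50, tau=0.07):
    """Pull `feat` toward positive proxies sharing its global pseudo label and
    push it from the k hardest negative proxies across cameras (Eq. (4)).
    For simplicity the image's own proxy is also counted as a positive here."""
    sims = memory @ feat / tau
    pos = proxy_cls == proxy_cls[proxy_id]          # positive proxies, all cameras
    neg_sims = sims.masked_fill(pos, float("-inf"))
    hard_neg = neg_sims.topk(min(k, int((~pos).sum()))).indices
    denom = torch.logsumexp(torch.cat([sims[pos], sims[hard_neg]]), dim=0)
    return -(sims[pos] - denom).mean()

def proxy_balanced_batches(proxy_labels, n_proxies=16, n_per_proxy=4):
    """Yield index batches with the same number of images per sampled proxy."""
    groups = defaultdict(list)
    for idx, p in enumerate(proxy_labels):
        if p >= 0:
            groups[int(p)].append(idx)
    while True:
        chosen = random.sample(list(groups), min(n_proxies, len(groups)))
        yield [i for p in chosen
               for i in random.choices(groups[p], k=n_per_proxy)]
```

The total objective would then simply combine the two terms, e.g. `loss = intra + lam * inter`, mirroring Eq. (5).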
5.1 Experiment Setting

Datasets and metrics. Market-1501 Zheng et al. (2015) contains 32,668 images of 1,501 identities captured by 6 disjoint cameras. It is split into three sets: the training set has 12,936 images of 751 identities, the query set has 3,368 images of 750 identities, and the gallery set contains 19,732 images of 750 identities. DukeMTMC-reID Zheng et al. (2017) is a subset of DukeMTMC Ristani et al. (2016). It contains 36,411 images of 1,812 identities captured by 8 cameras. Among them, 702 identities are used for training and the remaining identities are used for testing. MSMT17 Wei et al. (2018a) is the largest and most challenging dataset. It has 126,441 images of 4,101 identities captured in 15 camera views, containing both indoor and outdoor scenarios. 32,621 images of 1,041 identities are used for training; the rest, including 82,161 gallery images and 11,659 query images, are used for testing. Performance is evaluated by the Cumulative Matching Characteristic (CMC) and mean Average Precision (mAP), as is common practice. For the CMC measurement, we report Rank-1, Rank-5, and Rank-10. Note that no post-processing techniques like re-ranking Zhong et al. (2017) are used in our evaluation.

Implementation details. We adopt an ImageNet-pretrained ResNet-50 He et al. (2016) as the network backbone. Based upon it, we remove the fully-connected classification layer and add a Batch Normalization (BN) layer after the Global Average Pooling (GAP) layer. The normalized feature is used for the updating of proxies in the memory during training, and also for the distance ranking during inference. The memory updating rate $\mu$, the temperature factor $\tau$, the number $K$ of hard negative proxies, and the balancing factor $\lambda$ in Eq. (5) are set empirically. At the beginning of each epoch (i.e., iteration), we compute the Jaccard distance with k-reciprocal nearest neighbors Zhong et al. (2017) and use DBSCAN Ester et al. (1996) with a fixed distance threshold for the camera-agnostic global clustering. During training, only the intra-camera loss is used in the first 5 epochs; in the remaining epochs, the intra- and inter-camera losses work together. The learning rate follows a warmup scheme in the first 10 epochs and is then decayed in stages over the remaining epochs. Each training batch consists of images randomly sampled proxy by proxy, with an equal number of images per proxy. Random flipping, cropping, and erasing are applied as data augmentation.

5.2 Ablation Studies

In this subsection, we investigate the effectiveness of the proposed method by examining the intra- and inter-camera learning components, together with the proxy-balanced sampling strategy. For reference, we first present the results of the baseline model introduced in Section 3, as shown in Table 1. Then, we examine six variants of the proposed camera-aware proxy (CAP) assisted model, referred to as CAP1-6. Compared with the baseline model, the proposed full model (CAP6) significantly boosts the performance on all three datasets. The full model gains clear Rank-1 and mAP improvements on Market-1501 and DukeMTMC-reID, and it dramatically boosts the performance on MSMT17, achieving large Rank-1 and mAP improvements over the baseline. The MSMT17 dataset is considerably more challenging than the other two, containing complex scenarios and appearance variations. The superior performance on MSMT17 shows that our full model gains an outstanding ability to deal with severe intra-ID variance. In the following, we take a close look at each component.
Effectiveness of the intra-camera learning. Compared with the baseline model, the intra-camera learning benefits from two aspects: 1) each intra-camera Re-ID task is easier than the global counterpart because it deals with fewer IDs and smaller intra-ID variance; 2) the intra-camera learning suffers less from label noise since the per-camera pseudo labels are more reliable. These advantages enable the intra-camera learning to gain promising performance. As shown in Table 1, the CAP1 model, which only employs the intra-camera loss, performs comparably to the baseline. When adopting the proxy-balanced sampling strategy, the CAP2 model outperforms the baseline on all datasets. In addition, we observe that the performance drops when removing the intra-camera loss from the full model (CAP4 vs. CAP6), validating the necessity of this component.

Effectiveness of the inter-camera learning. Complementary to the above-mentioned intra-camera learning, the inter-camera learning improves the Re-ID model by explicitly exploiting the correlations across cameras. It not only deals with the intra-ID variance via pulling positive proxies together, but also tackles the inter-ID similarity problem via pushing hard negative proxies away. With this component, both CAP5 and CAP6 significantly boost the performance over CAP1 and CAP2, respectively. In addition, we find that the inter-camera loss alone (CAP3) is able to produce decent performance, and adding the intra-camera loss or the sampling strategy boosts the performance further.

Effectiveness of the proxy-balanced sampling strategy. The proxy-balanced sampling strategy is proposed to balance the varying numbers of images contained in different proxies. To show that it is indeed helpful, we compare it with the extensively used class-balanced strategy, which ignores camera information. Table 1 shows that the models (CAP2, CAP4, and CAP6) using our sampling strategy are superior to their counterparts, validating the effectiveness of this strategy.

Table 2: Comparison with state-of-the-art methods on Market-1501, DukeMTMC-reID, and MSMT17. For each dataset we report Rank-1, Rank-5, Rank-10, and mAP (%).

| Method | Venue | Market R1 | R5 | R10 | mAP | Duke R1 | R5 | R10 | mAP | MSMT17 R1 | R5 | R10 | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Purely Unsupervised | | | | | | | | | | | | | |
| BUC Lin et al. (2019) | AAAI19 | 66.2 | 79.6 | 84.5 | 38.3 | 47.4 | 62.6 | 68.4 | 27.5 | - | - | - | - |
| UGA Wu et al. (2019b) | ICCV19 | 87.2 | - | - | 70.3 | 75.0 | - | - | 53.3 | 49.5 | - | - | 21.7 |
| SSL Lin et al. (2020) | CVPR20 | 71.7 | 83.8 | 87.4 | 37.8 | 52.5 | 63.5 | 68.9 | 28.6 | - | - | - | - |
| MMCL Wang and Zhang (2020) | CVPR20 | 80.3 | 89.4 | 92.3 | 45.5 | 65.2 | 75.9 | 80.0 | 40.2 | 35.4 | 44.8 | 49.8 | 11.2 |
| HCT Zeng et al. (2020) | CVPR20 | 80.0 | 91.6 | 95.2 | 56.4 | 69.6 | 83.4 | 87.4 | 50.7 | - | - | - | - |
| CycAs Wang et al. (2020b) | ECCV20 | 84.8 | - | - | 64.8 | 77.9 | - | - | 60.1 | 50.1 | - | - | 26.7 |
| SpCL Ge et al. (2020b) | NeurIPS20 | 88.1 | 95.1 | 97.0 | 73.1 | - | - | - | - | 42.3 | 55.6 | 61.2 | 19.1 |
| Unsupervised Domain Adaptation | | | | | | | | | | | | | |
| PUL Fan et al. (2018) | TOMM18 | 45.5 | 60.7 | 66.7 | 20.5 | 30.0 | 43.4 | 48.5 | 16.4 | - | - | - | - |
| SPGAN Deng et al. (2018b) | CVPR18 | 51.5 | 70.1 | 76.8 | 22.8 | 41.1 | 56.6 | 63.0 | 22.3 | - | - | - | - |
| ECN Zhong et al. (2019) | CVPR19 | 75.1 | 87.6 | 91.6 | 43.0 | 63.3 | 75.8 | 80.4 | 40.4 | 30.2 | 41.5 | 46.8 | 10.2 |
| pMR Wang et al. (2020a) | CVPR20 | 83.0 | 91.8 | 94.1 | 59.8 | 74.5 | 85.3 | 88.7 | 55.8 | - | - | - | - |
| MMCL Wang and Zhang (2020) | CVPR20 | 84.4 | 92.8 | 95.0 | 60.4 | 72.4 | 82.9 | 85.0 | 51.4 | 43.6 | 54.3 | 58.9 | 16.2 |
| AD-Cluster Zhai et al. (2020) | CVPR20 | 86.7 | 94.4 | 96.5 | 68.3 | 72.6 | 82.5 | 85.5 | 54.1 | - | - | - | - |
| MMT Ge et al. (2020a) | ICLR20 | 87.7 | 94.9 | 96.9 | 71.2 | 78.0 | 88.8 | 92.5 | 65.1 | 50.1 | 63.9 | 69.8 | 23.3 |
| SpCL Ge et al. (2020b) | NeurIPS20 | 90.3 | 96.2 | 97.7 | 76.7 | 82.9 | 90.1 | 92.5 | 68.8 | 53.1 | 65.8 | 70.5 | 26.5 |
| Fully Supervised (for reference) | | | | | | | | | | | | | |
| PCB Sun et al. (2018) | ECCV18 | 93.8 | - | - | 81.6 | 83.3 | - | - | 69.2 | 68.2 | - | - | 40.4 |
| ABD-Net Chen et al. (2019) | ICCV19 | 95.6 | - | - | 88.3 | 89.0 | - | - | 78.6 | 82.3 | 90.6 | - | 60.8 |
| CAP's Upper Bound | This paper | 93.3 | 97.5 | 98.4 | 85.1 | 87.7 | 93.7 | 95.4 | 76.0 | 77.1 | 87.4 | 90.8 | 53.7 |

Visualization of learned feature representations. In order to investigate how each learning component behaves, we utilize t-SNE van der Maaten and Hinton (2008) to visualize the feature representations learned by the baseline model, the intra-camera learned model CAP2, and the full model CAP6. Figure 4 presents the image features of 10 IDs taken from MSMT17. From the figure we observe that the baseline model fails to distinguish several pairs of IDs. In contrast, the CAP2 model, which conducts the intra-camera learning only, separates some of these pairs better. With the additional inter-camera learning component, the full model can distinguish most of the IDs, greatly improving the intra-ID compactness and inter-ID separability, although it may still fail in a few tough cases.

5.3 Comparison with State-of-the-Arts

In this section, we compare the proposed method (named CAP) with state-of-the-art methods. The comparison results are summarized in Table 2.

Comparison with purely unsupervised methods. Five recent purely unsupervised methods are included for comparison: BUC Lin et al. (2019), UGA Wu et al. (2019b), SSL Lin et al. (2020), HCT Zeng et al. (2020), and CycAs Wang et al. (2020b). Both BUC and HCT are clustering-based, sharing the same technique with ours. Additionally, we also compare with MMCL Wang and Zhang (2020) and SpCL Ge et al. (2020b), two UDA-based methods working under the purely unsupervised setting. From the table, we observe that our proposed method outperforms all state-of-the-art counterparts by a great margin. For instance, compared with the second-best method, our approach obtains clear Rank-1 and mAP gains on Market-1501, DukeMTMC-reID, and MSMT17.

Comparison with UDA-based methods. Recent unsupervised works focus more on UDA techniques that exploit external labeled data to boost the performance. Table 2 presents eight UDA methods. Surprisingly, without using any labeled information, our approach outperforms seven of them on both Market-1501 and DukeMTMC-reID, and is on par with SpCL. On the challenging MSMT17 dataset, our approach surpasses all of these methods by a great margin, achieving substantial Rank-1 and mAP gains when compared to SpCL.

Comparison with fully supervised methods. Finally, we provide two fully supervised methods for reference, including the well-known PCB Sun et al. (2018) and the state-of-the-art ABD-Net Chen et al. (2019). We also report the performance of our network backbone trained with ground-truth labels, which indicates the upper bound of our approach. We observe that our unsupervised model (CAP) greatly narrows the gap with PCB on all three datasets. Besides, there is still room for improvement if we could strengthen our backbone by integrating recent attention-based techniques like ABD-Net.

6 Conclusion

In this paper, we have presented a novel camera-aware proxy assisted learning method for the purely unsupervised person Re-ID task. Our method is able to deal with the large intra-ID variance resulting from the change of camera views, which is crucial for a Re-ID model to improve performance.
With the assistance of camera-aware proxies, our proposed intra- and inter-camera learning components effectively improve ID discrimination within and across cameras, as validated by the experiments on three large-scale datasets. Comparisons with both purely unsupervised and UDA-based methods demonstrate the superiority of our method.

References

- ABD-Net: attentive but diverse person re-identification. In ICCV.
- A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709.
- Deep association learning for unsupervised video person re-identification. In BMVC.
- Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In CVPR.
- Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In CVPR.
- A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD.
- Unsupervised person re-identification: clustering and fine-tuning. ACM TOMM.
- Mutual mean-teaching: pseudo label refinery for unsupervised domain adaptation on person re-identification. In ICLR.
- Self-paced contrastive learning with hybrid memory for domain adaptive object re-id. In NeurIPS.
- Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722.
- Deep residual learning for image recognition. In CVPR.
- In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737.
- ImageNet classification with deep convolutional neural networks. In NIPS.
- Unsupervised person re-identification by deep learning tracklet association. In ECCV.
- A bottom-up clustering approach to unsupervised person re-identification. In AAAI.
- Unsupervised person re-identification via softened similarity learning. In CVPR.
- Adaptive transfer network for cross-domain person re-identification. In CVPR.
- No fuss distance metric learning using proxies. In ICCV.
- Adversarial camera alignment network for unsupervised cross-camera person re-identification. arXiv preprint arXiv:1908.00862.
- Progressive cross-camera soft-label learning for semi-supervised person re-identification. IEEE TCSVT.
- A novel unsupervised camera-aware domain adaptation framework for person re-identification. In ICCV.
- SoftTriple loss: deep metric learning without triplet sampling. In ICCV.
- Large-scale distance metric learning with uncertainty. In CVPR.
- Metric learning with adaptive density discrimination. In ICLR.
- Performance measures and a data set for multi-target, multi-camera tracking. In ECCV.
- No subclass left behind: fine-grained robustness in coarse-grained classification problems. In NeurIPS.
- Beyond part models: person retrieval with refined part pooling (and a strong convolutional baseline). In ECCV.
- Visualizing data using t-SNE. JMLR.
- Unsupervised person re-identification via multi-label classification. In CVPR.
- Smoothing adversarial domain attack and p-memory reconsolidation for cross-domain person re-identification. In CVPR.
- Towards precise intra-camera supervised person re-identification. In WACV.
- CycAs: self-supervised cycle association for learning re-identifiable descriptions. In ECCV.
- Person transfer GAN to bridge domain gap for person re-identification. In CVPR.
- Person transfer GAN to bridge domain gap for person re-identification. In CVPR.
- Unsupervised person re-identification by camera-aware similarity consistency learning. In ICCV.
- Unsupervised graph association for person re-identification. In ICCV.
- Unsupervised feature learning via non-parametric instance discrimination. In CVPR.
- Dynamic label graph matching for unsupervised video re-identification. In ICCV.
- Hierarchical clustering with hard-batch triplet loss for person re-identification. In CVPR.
- AD-Cluster: augmented discriminative clustering for domain adaptive person re-identification. In CVPR.
- Scalable person re-identification: a benchmark. In ICCV.
- Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In ICCV.
- Re-ranking person re-identification with k-reciprocal encoding. In CVPR.
- Invariance matters: exemplar memory for domain adaptive person re-identification. In CVPR.
- Intra-camera supervised person re-identification. arXiv preprint arXiv:2002.05046.
- Intra-camera supervised person re-identification: a new benchmark. In ICCVW.
We have been clear that we have a distinct approach to Advanced Malware Protection (AMP), specifically the unique way in which we leverage the compute and storage capabilities of the public cloud. Doing so enables us to do a great number of things to help customers fight malware more effectively, particularly when compared to traditional, point-in-time anti-malware systems of the past 20 years. News of high-profile targeted data center attacks has dominated security headlines recently, but data center attacks are even more prevalent than those headlines suggest. In fact, a survey conducted last summer by Network World suggests that 67 percent of data center administrators experienced downtime due to malware and related attacks in the previous 12 months. A key challenge is that many of today's security solutions are simply not designed for the data center, with limitations in both provisioning and performance. The situation will likely get worse before it gets better as data center traffic grows exponentially and data centers migrate from physical, to virtual, to next-generation environments like Software-Defined Networks (SDN) and Application Centric Infrastructures (ACI). The increased scrutiny on security is being driven by the evolving trends of expanding networks, mobility, cloud computing, and a threat landscape that is more dynamic than ever. A combination of these factors has led to an increase in attack access points and a redefinition of the traditional network perimeter. Due to these concerns, we have been strong proponents of threat-centric security that lets defenders address the full attack continuum and all attack vectors to respond at any time — before, during, and after attacks. We are all struggling with the security problem today. Zero-day attacks and advanced persistent threats have outpaced the capabilities of traditional security methods that rely exclusively on single-point-in-time detection and blocking. There is a tremendous amount of complexity in our environments and security expertise is in short supply. At the same time, the movement to an Internet of Everything (IoE) is accelerating and creating significant opportunities for businesses and attackers alike as more people, processes, data, and things come online. This is why Cisco is steadfast in its pursuit of a threat-centric security model that addresses the full attack continuum – before, during, and after an attack. Organizations are quickly discovering that a "one size fits all" approach to security across the network falls short of addressing the unique trends in the Data Center. So what's really that unique about the Data Center (DC)? This is a multi-part blog highlighting various trends related to securing the DC, with Part One focusing on traffic trends.
Applications for wireless networking have been evolving rapidly and are becoming an integral part of our everyday life. With recent performance advancements in wireless communication technologies, mobile wireless ad-hoc networks are now used in many areas such as military, health, and commercial applications. Mobile ad hoc networks use radio waves and microwaves to maintain communication channels between computers, and 802.11 (Wi-Fi) is the pre-eminent technology for building general-purpose wireless networks. Mobile ad-hoc networking (MANET) utilizes the Internet Protocol (IP) suite and aims at supporting robust and efficient operation by incorporating routing functionality into the mobile nodes. A MANET is one of the wireless network types that uses 802.11 to transmit data from a source to a destination. Since MANETs are used in applications such as defense, security is of vital importance due to their wireless nature. Wireless networks are vulnerable to attacks like eavesdropping, man-in-the-middle (MITM) attacks, and hijacking, and so are MANETs. A malicious node can get within the wireless range of the nodes in the MANET and disrupt the communication process. Various routing protocols have been proposed that use cryptographic techniques to protect routing in MANETs. In this thesis, I implemented security techniques (SHA-1 hashing and RSA encryption) in two reactive routing protocols, the Ad Hoc On-Demand Distance Vector (AODV) routing protocol and the Dynamic Source Routing (DSR) protocol, and compared their network performance using the following evaluation parameters: average end-to-end delay, routing load, and packet delivery fraction. SHA-1 and RSA were used to maintain the integrity and confidentiality of the messages sent by the nodes in the network. There has been considerable research in this area, but no one has compared the performance of secured MANET protocols. I go one step further by comparing the secured routing protocols, which helps determine which protocol performs better in scenarios where security is of utmost importance.
Library of Congress Subject Headings: Ad hoc networks (Computer networks); Routing protocols (Computer network protocols); Data encryption (Computer science); Computer networks--Security measures
Jafferi, Jaseem, "Performance comparison between Ad Hoc On Demand Distance Vector and Dynamic Source Routing Protocols with security encryption using OPNET" (2012). Thesis. Rochester Institute of Technology. Accessed from RIT – Main Campus
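As a rough illustration of the kind of message protection the thesis describes (integrity and authenticity of routing control messages), the sketch below signs an AODV-style route request with RSA over a SHA-1 digest. It is our own example using the Python `cryptography` package, not the thesis' OPNET simulation models, and the message layout shown is invented for the example; confidentiality (encrypting the payload) would be handled separately.

```python
# Hedged illustration: a source node signs a route request so that receiving
# nodes can verify it was not tampered with in transit (SHA-1 digest, RSA signature).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Example (invented) route request fields for an AODV-style RREQ.
rreq = b"RREQ|src=10.0.0.1|dst=10.0.0.9|seq=42|hops=0"

# Source node: sign the SHA-1 digest of the message with its RSA private key.
signature = private_key.sign(rreq, padding.PKCS1v15(), hashes.SHA1())

# Intermediate or destination node: verify before acting on the route request.
# Raises cryptography.exceptions.InvalidSignature if the message was altered.
public_key.verify(signature, rreq, padding.PKCS1v15(), hashes.SHA1())
print("route request verified")
```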
One of the primary challenges confronting cybersecurity administrators is addressing the multitude and variety of entry, traversal, and attack strategies hackers use to penetrate IT infrastructure. Understanding these "attack paths" - routes from a point of compromise to an attractive and valuable target in an organization's network - becomes critical in devising a strategy to effectively block attacks. Using attack paths to analyze a network's susceptibility requires determining not only how a malicious actor can enter the network but also how that actor would move through the network and what resources they would access in doing so. This is complicated by the fact that even a mid-size organization with a few hundred devices may exhibit tens of thousands of possible attack paths. Performing a manual analysis of each step in an attack and attempting to estimate a potential attacker's favored route(s) is a data- and computation-intensive effort, an ideal task for a trained artificial intelligence system. Reveald's Epiphany Intelligence Platform does exactly this: it addresses these issues by not only automating the discovery of attack paths but also prioritizing them based on the likelihood that each path would be used in a real attack.

An attack path comprises three primary components: a foothold, a target, and a set of movements connecting the two. A foothold can be any tangible asset in a network, including users, workstations, and other physical infrastructure. Typically, footholds susceptible to social engineering are most likely to provide an attacker with an opportunity to enter a network. Targets are more subjective than footholds. These are assets of value to the attacker (and likely the organization), which can range from application servers hosting collections of attractive data, to workstations with sensitive data, to systems that act as powerful traversal points due to their privileged access. Identifying footholds and targets is just the first step toward mitigating attack paths. More challenging is the enumeration of the movements an attacker would make to obtain control over the target asset after securing a foothold in the network.

A common approach to automation is to leverage reinforcement learning. This is a machine learning paradigm comprising an agent, an environment, and a reward function. The agent is a process that iteratively updates a decision-making policy based on previous experience; the policy can take several forms, from hash tables to deep neural networks. The environment is a collection of states that the agent can occupy; it also imposes constraints on the moves the agent can make. The reward function is an algorithmic model for measuring the quality of the agent's decisions. Together, these components define a robust framework for navigating an arbitrary state space, and careful construction of such an agent results in effective mimicry of a purpose-driven actor. Epiphany combines a source-agnostic IT data model with a suitable reinforcement learning implementation, resulting in a system of automated pathfinding that can analyze network graphs in minutes and produce a prioritized list of attack paths for remediation. We term these pathfinders "EvilBots", given their intent of replicating motivated cyber attackers' behavior to enter and traverse a target's network and seek a desired outcome in terms of access or information, while working benignly on behalf of the legitimate users of the system they investigate.
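To make the agent/environment/reward framing concrete, here is a deliberately tiny, generic sketch of reinforcement-learning-style attack-path discovery over a toy asset graph. It is not Epiphany's EvilBot implementation: the graph, asset values, reward shaping, and hyperparameters are all invented for illustration, and a production system would use learned neural policies and prioritized experience replay rather than this tabular Q-learning loop.

```python
# Toy sketch: states are network assets, the agent starts at a foothold, and the
# reward function favors reaching valuable targets in few movements.
import random
from collections import defaultdict

GRAPH = {                      # asset -> assets reachable in one movement (invented)
    "phished_user": ["workstation"],
    "workstation": ["file_server", "jump_host"],
    "jump_host": ["domain_controller"],
    "file_server": [],
    "domain_controller": [],
}
ASSET_VALUE = {"file_server": 3.0, "domain_controller": 10.0}
STEP_COST = -0.1               # efficient paths are rewarded

Q = defaultdict(float)         # (state, next_state) -> estimated value
alpha, gamma, eps = 0.2, 0.9, 0.2

def choose(state):
    moves = GRAPH[state]
    if not moves:
        return None
    if random.random() < eps:                       # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(state, m)])  # exploit the current policy

for _ in range(2000):                               # training episodes
    state = "phished_user"                          # foothold
    for _ in range(10):
        nxt = choose(state)
        if nxt is None:
            break
        reward = STEP_COST + ASSET_VALUE.get(nxt, 0.0)
        best_next = max((Q[(nxt, m)] for m in GRAPH[nxt]), default=0.0)
        Q[(state, nxt)] += alpha * (reward + gamma * best_next - Q[(state, nxt)])
        state = nxt
        if nxt in ASSET_VALUE:                      # target reached
            break

# Greedy read-out of the highest-value attack path from the foothold.
state, path = "phished_user", ["phished_user"]
while GRAPH[state]:
    state = max(GRAPH[state], key=lambda m: Q[(state, m)])
    path.append(state)
    if state in ASSET_VALUE:
        break
print(" -> ".join(path))
```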
The backbone of this implementation is Epiphany's expert-curated reward function. This algorithm is designed to reinforce virtual-agent policies that target highly susceptible footholds, make efficient movements through the network, and obtain control over both their targets and other valuable assets accessible along their trajectory. With that in hand, virtual agents are initialized with state-of-the-art deep neural networks that model decision-making policies. These models read in vector representations of the states available to the agent and produce a distribution of the relative probability that each state would be traversed in a real attack. In the context of network graphs, the states are network assets such as users, workstations, servers, IAM objects, and so on.

The process of training these models follows a basic (off-policy) reinforcement learning procedure. The agent uses its (initially untrained) model to choose a foothold, then continues choosing states until it reaches a target. At each step, it receives some reward. This experience is saved and sampled at regular intervals to train the neural networks. Training is accelerated through prioritized experience replay, whereby trajectories that offer greater insight are sampled more frequently. After sufficient exploration, the agent switches to evaluating the network. To generate viable attack paths, the agent runs the same process as it did during training; the key difference is that the agent now explores all options, in order, and records each path it traverses. The result is an ordered set of realistic attack paths. With these in hand, cybersecurity administrators are able to prioritize their work to remediate the issues that present the highest risk to their network.

Leveraging Epiphany's EvilBots saves analysts' time by automating the discovery of attack paths, saves administrators' time by prioritizing those paths for remediation, and saves the organization money by enabling teams to anticipate and mitigate attacks early and often. Moreover, these savings compound: organizations can invest these resources in hardening their network with the comfort of knowing Epiphany will be there to double-check their work, and they can repeat the investigation without any additional setup, making continuous validation a trivial activity.

Schaul, T., Quan, J., Antonoglou, I., & Silver, D. (2015). Prioritized Experience Replay (Version 4). arXiv. https://doi.org/10.48550/ARXIV.1511.05952

James acts as Reveald's Senior Data Scientist, using AI to drive material risk reduction for all our customers against a host of real cyber attackers while ensuring stable, secure access to any and all data Reveald's analysts need to deliver top-quality service to all of our customers. When he's not doing that, you can find James bouldering at his local gym, spending some quality time with his dog Cersei, or checking out NYC's hidden restaurant gems.
Cybercriminals have been using a new technique, involving PowerPoint files and mouse-over events, to get users to execute arbitrary code on their systems and download malware. It is not uncommon to deliver malware using specially crafted Office files, particularly Word documents. Those attacks depend on social engineering to trick the targeted user into enabling the VBA macros embedded in the document. The newly discovered attack, however, does not require users to enable macros. The malicious PowerPoint files distribute a banking Trojan called 'Zusy.' These files, named "order.ppsx" or "invoice.ppsx," have been distributed via spam emails with subjects such as "Purchase Order #130527" and "Confirmation." Analysis conducted by Ruben Daniel Dodge shows that when the PowerPoint presentation is opened, it displays the text "Loading…Please wait" as a hyperlink. PowerShell code is executed as soon as the user hovers the mouse over the link, even without clicking it. The Protected View security feature, which is enabled by default in most supported versions of Office, prompts the user to enable or disable the content. If the victim enables the content, the code is executed and a domain named "cccn.nl" is contacted; a file is downloaded from it, which results in the malware being downloaded and deployed. It has been noted, however, that the attack does not work if the presentation is opened in PowerPoint Viewer, and recent versions of Office warn the user before the code gets executed. "Users might still somehow enable external programs because they're lazy, in a hurry, or they're only used to blocking macros. Also, some configurations may possibly be more permissive in executing external programs than they are with macros," SentinelOne Labs said in a blog post.
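For defenders, one practical response is to triage incoming presentations for hover-triggered actions before they reach users. The sketch below is a heuristic, illustrative scanner, not an official or complete detection: it treats the .ppsx as the OOXML zip archive it is and flags slide XML containing hover hyperlinks or strings such as "ppaction://program" and "powershell". The element names and strings searched for are assumptions, not a vetted signature.

```python
# Heuristic .ppsx/.pptx triage sketch: unzip the package and look for
# hover-triggered hyperlink actions or PowerShell-related strings in slide XML.
import re
import sys
import zipfile

SUSPICIOUS = (re.compile(rb"ppaction://program", re.I),   # "run external program" action
              re.compile(rb"powershell", re.I))           # PowerShell one-liners

def scan_ppsx(path):
    hits = []
    with zipfile.ZipFile(path) as pkg:
        for name in pkg.namelist():
            if not name.startswith("ppt/slides/"):
                continue
            data = pkg.read(name)
            # Hover-triggered hyperlink elements (exact element name may vary by schema).
            if b"hlinkHover" in data or b"hlinkMouseOver" in data:
                hits.append((name, "mouse-over hyperlink present"))
            for pat in SUSPICIOUS:
                if pat.search(data):
                    hits.append((name, f"suspicious string: {pat.pattern.decode()}"))
    return hits

if __name__ == "__main__":
    for slide, reason in scan_ppsx(sys.argv[1]):
        print(f"{slide}: {reason}")
```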
Performing threat risk modeling using the Microsoft Threat Modeling Process

Trike

Trike is a threat modeling framework with similarities to the Microsoft threat modeling processes. Trike differs because it uses a risk-based approach with distinct implementation, threat, and risk models, instead of using the STRIDE/DREAD aggregated threat model (attacks, threats, and weaknesses). Trike's goals are:
- With assistance from the system stakeholders, to ensure that the risk this system entails to each asset is acceptable to all stakeholders.
- To be able to tell whether we have done this.
- To communicate what we have done and its effects to the stakeholders.
- To empower stakeholders to understand and reduce the risks to them and other stakeholders implied by their actions within their domains.
The approach uses a three-layer security model. Not all security models use the layered approach. The model is also particularized depending on the size of the organization that will implement it:
- a model for the corporate level, and
- a model for small and medium businesses.
Both models support the layered approach.
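As a loose illustration of Trike's risk-centric idea (risk to each asset kept within a tolerance agreed with stakeholders), the following sketch enumerates actor/asset/action combinations and flags those whose estimated risk exceeds the agreed tolerance. The scale, entries, and field names are invented for the example and are not part of the Trike specification.

```python
# Minimal, invented illustration of an asset-centric risk check in the spirit of Trike.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    actor: str
    asset: str
    action: str          # create / read / update / delete
    estimated_risk: int  # 1 (negligible) .. 5 (severe), illustrative scale
    acceptable_risk: int # tolerance agreed with stakeholders

entries = [
    RiskEntry("anonymous_web_user", "customer_records", "read",   5, 1),
    RiskEntry("support_staff",      "customer_records", "update", 3, 3),
    RiskEntry("dba",                "customer_records", "delete", 4, 2),
]

for e in entries:
    verdict = "OK" if e.estimated_risk <= e.acceptable_risk else "MITIGATE"
    print(f"{e.actor:>20} {e.action:>6} {e.asset}: risk {e.estimated_risk} "
          f"(tolerance {e.acceptable_risk}) -> {verdict}")
```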