Generate BKS Certificates for SSL Pinning in Android Apps

 

  1. Ensure that the certificate for which we are generating the BKS keystore is already installed and serving on the web server
  2. Generate a PKCS12-format certificate bundle from the PEM certificates using the command below:
    
    
    openssl pkcs12 -export -out cert-complete.pfx -inkey cert-key.pem -in cert-leaf.pem -certfile cert-chain.pem
    
    Enter Export Password:
    
    Verifying - Enter Export Password:
  3. Set a strong export password of at least 12 characters (alphanumeric + special characters)
  4. Download the "Portecle" tool from http://portecle.sourceforge.net/
  5. Run the Tool
  6. Load the PKCS12 certificate generated in Step #2
  7. Enter the password that was set
  8. You should see the certificate listed
  9. Highlight the entry and navigate to "Tools → Change Keystore Type → BKS"
  10. Enter the same password again
  11. If the password is correct, you will see a confirmation message
  12. Now select the option "File → Save Keystore As"
  13. Save the file to the desired location, setting a password if prompted. ENSURE THAT THE FILE EXTENSION IS .bks
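If you prefer doing the conversion from the command line instead of Portecle, keytool together with the Bouncy Castle provider JAR can produce the same BKS file. This is only a sketch under the assumption that the provider JAR (downloadable from bouncycastle.org) sits in the current directory and that the file names from Step #2 are used; the exact provider flags can differ between JDK versions:

    # Sketch: convert the PKCS12 bundle from Step #2 into a BKS keystore
    # (bcprov-jdk15on.jar is an assumed file name for the Bouncy Castle provider JAR)
    keytool -importkeystore \
        -srckeystore cert-complete.pfx -srcstoretype PKCS12 \
        -destkeystore cert-complete.bks -deststoretype BKS \
        -providerclass org.bouncycastle.jce.provider.BouncyCastleProvider \
        -providerpath ./bcprov-jdk15on.jar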


Build your own DoS/DDoS/Bot Mitigation Gear & Fighting Fake Google/Yahoo/Bing/Apple Bots

HAPROXY (Rate Limiter + Bad-User Detection)

NGINX (rDNS + GeoIP)

The platforms used are completely open source. Please refer to their respective documentation at the links below:

NGINX: https://nginx.org/en/docs/

HAPROXY: http://www.haproxy.org/#docs

In this article I shall discuss how we can build our own Service Protection Layer, which will include modules to protect our web application against DoS, DDoS and bad bots identified by User-Agent, together with a Web Application Firewall.

Whenever we introduce an extra service layer into our web application architecture, the chances of increased latency are high. Hence, for best results, always follow the flow below when you are building a Service Protection Layer in front of your firewall.

HAPROXY ──── Static Content ────> Web Application

HAPROXY ──── Dynamic Content ────> NGINX ────> Web Application

Using HAPROXY we shall perform:

Rate Limiting

User-Agent Detection (Bot Mitigation)

Using NGINX we shall perform:

Fake Google/Yahoo/Bing/Apple bot detection (rDNS)

GeoBlock (GeoIP)

Please download the signatures from https://github.com/aarvee11/webclient-detection or feel free to write your own !

The art of detecting and blocking attacks is all about finding Signatures, Patterns and understanding the Attack Vector!

Rate Limiting: Rate limiting is a technique used to prevent a particular client from abusing our platform with overwhelming requests, ensuring that the resources on the web application server are always available for legitimate users. Rate limiting for a web application should be gauged against 4 different parameters:

  1. Number of TCP connections per client IP
  2. Rate of incoming TCP connections from a client IP
  3. Rate of incoming HTTP requests from an HTTP client
  4. Rate of HTTP errors generated by an HTTP client

Note: A client at the application/HTTP layer can be identified by IP, Cookie, Parameter, IP+Cookie, IP+Cookie+User-Agent, or any other HTTP header such as Authorization.

When you notice a very large amount of traffic getting caught by the rate controls, prepare to scale your service protection layer infrastructure using techniques like AWS Auto Scaling based on the Network-In and Network-Out metrics at the instance level.

Bad bots: Most bots leave a signature behind, and the User-Agent is one of the key headers that reveals it. We shall compare the User-Agent against a known bad-bots list that can be downloaded from the git repository mentioned above.

Fake Google/Yahoo/Bing/Apple bots: Most web admins do not set any security controls on friendly bots like the ones mentioned above. Reason? If we block these bots, our SEO, visibility and functionality can take a hit, since they are friendly bots and not bad bots. How do they identify themselves? By connecting IP address and User-Agent. These providers state that they cannot publish a fixed IP address range because it changes dynamically, so the only consistent identifier left is the User-Agent. The challenge for a security administrator is that the User-Agent field is completely configurable, so any client out there can impersonate the well-known friendly-bot User-Agent signatures and cause harm to our application. Hence I adopted the reverse-lookup approach suggested by Google: using the rDNS module, the connecting IP address of any client presenting a bot User-Agent is resolved back to a hostname, and requests that turn out to be fake are rejected automatically. Cool, isn't it?
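To see the idea manually before wiring it into NGINX, the same two-step verification can be done from a shell. This is only a sketch: the IP address below is an example of a crawler address, and the exact hostname returned will differ per crawler.

    # Step 1: reverse lookup of the connecting IP address (example IP)
    host 66.249.66.1
    # expected output ends in a *.googlebot.com pointer, e.g. crawl-66-249-66-1.googlebot.com

    # Step 2: the forward lookup of that hostname must resolve back to the same IP
    host crawl-66-249-66-1.googlebot.com
    # expected: the hostname has address 66.249.66.1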

GeoIP: ICANN (through IANA and the regional registries) distributes IP addresses all over the world, and based on this allocation data we can determine the country from which a client is connecting to our web application. What if we have been seeing a good amount of attacks from a specific country? Since we can't keep mapping individual IP addresses, we can opt for geo-blocking by making NGINX geo-aware using the GeoIP module. Once done, we can explicitly allow certain countries and block the rest, or vice-versa!

For the ease of administrators, and to keep this article from being purely theoretical, I have provided config snippets below:

For any further queries please feel free to InMail me!

Happy Hunting folks!

HAPROXY Config Snippet:

In the Frontend Section:

=======================================

# Capture the Rate Limit Headers and Actual Client IP in Logs

 capture request header X-Haproxy-ACL len 256

 capture request header X-Bad-User len 64

 capture request header X-Forwarded-For len 64

 capture request header User-Agent len 256

 capture request header Host len 32

 capture request header X-Geo len 8

 capture request header X-Google-Bot len 8

 capture request header Referer len 256

 # Do Not Rate Control Google based bots. Fake Google bot attacks shall be filtered at NGINX

 acl google-ua hdr(user-agent) -i -f <path-to-webclient-detection-directory>/google-ua.lst

 http-request add-header X-Google-Bot %[req.fhdr(X-Google-Bot,-1)]Goog-YES, if google-ua

 # Define a table that will store IPs associated with counter

 stick-table type ip size 500k expire 30s store conn_cur,conn_rate(3s),http_req_rate(10s),http_err_rate(10s)

 # Enable tracking of src IP in the sticktable - Secops

 tcp-request content track-sc0 src

=======================================

# RATE LIMITING RULES

acl sensitive-urls path -i /api/app/login /api/app/otp /api/app/forgotpass

 # Flag the request if the client already has 100 connections open

 http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-over-100-active-connections, if { src_conn_cur ge 100 }

 # Flag the request if the client has opened more than 65 TCP connections in 3 seconds

 http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-over-65-connections-in-3seconds, if { src_conn_rate ge 65 }

 # Flag the request if the client has exceeded the HTTP error rate (10 HTTP errors in 10 seconds)

 http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-10-errors-in-10-seconds, if { sc0_http_err_rate() gt 10 }

 # Flag the request if the client has exceeded the HTTP request rate (70 HTTP requests in 10 seconds) on non-sensitive URLs

 http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-70-HTTPRequests-in-10-seconds, if { sc0_http_req_rate() gt 70 } !sensitive-urls

 # Flag requests hitting the sensitive URLs more than 30 times in a 10-second interval

 http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-Listing, if { sc0_http_req_rate() gt 30 } sensitive-urls

=======================================

 # FLAGGING BAD-BOTS @ HAPROXY LEVEL

acl badbots hdr_sub(user-agent) -f <path-to-webclient-detection-directory>/bad-bots.lst

 acl nullua hdr_len(user-agent) 0

 acl availua hdr(user-agent) -m found

 acl ua-regex hdr_reg(user-agent) -i .+?[/\s][\d.]+

 acl tornodes src -f <path-to-webclient-detection-directory>/tor-exit-nodes.lst

 http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]BadBot, if badbots

 http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]No-UA, if nullua !google-ua

 http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]No-UA, if !availua !google-ua

 http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]Invalid-UA, if !ua-regex !google-ua

 http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]Tor-Node, if tornodes

 http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]Trace-Method, if METH_TRACE

======================================= 

# FLAGGING SCANNERS @ HAPROXY LEVEL

acl scanner hdr_sub(user-agent) -f <path-to-webclient-detection-directory>/scanners.lst

http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]Scanner, if scanner
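
Note that all of the rules above only flag suspicious requests by appending markers to the X-Haproxy-ACL and X-Bad-User headers, which keeps the first rollout observe-only via the captured log fields. Once you are comfortable with what is being flagged, a possible enforcement step (my own sketch, not part of the snippet above) is to deny anything that carries a flag:

 # OPTIONAL ENFORCEMENT (sketch): deny any request that was flagged by the rules above
 http-request deny if { req.fhdr(X-Haproxy-ACL,-1) -m found }
 http-request deny if { req.fhdr(X-Bad-User,-1) -m found }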

 NGINX Config Snippet:

location / {

        #Enable Reverse DNS for all the Googlebots

 resolver 8.8.8.8;

 rdns_allow "(.*)google(.*)";

 rdns_allow "(.*)crawl.yahoo.net(.*)";

 rdns_allow "(.*)search.msn.com(.*)";

 rdns_allow "(.*)applebot.apple.com(.*)";

 rdns_deny "^(?!(.*)google(.*))|^(?!(.*)crawl.yahoo.net(.*))|^(?!(.*)search.msn.com(.*))|^(?!(.*)applebot.apple.com(.*))";

        if ($http_user_agent ~* "(.*)[Gg]oogle(.*)") {

            rdns on;

        }

        if ($http_user_agent ~* "(.*)[Ss]lurp(.*)") {

            rdns on;

        }

        if ($http_user_agent ~* "(.*)[Bb]ing(.*)") {

            rdns on;

        }

        if ($http_user_agent ~* "(.*)[Aa]pplebot(.*)") {

            rdns on;

        }

        # Set a Variable to take an action based on the result

        if ($rdns_hostname ~* ((.*)googlebot\.com)) {

            set $valid_bot 1;

        }

        if ($rdns_hostname ~* ((.*)google\.com)) {

            set $valid_bot 1;

        }

        # Return 403 if a client claiming to be Google did not resolve back to a Google hostname

        if ($valid_bot = "0") {

            return 403;

        }

 #Geo Block Section
        # $allowed_country is assumed to be defined in the http block (see the map sketch below the snippet)
        set $exclusions 0;

        if ($allowed_country = yes) {

            set $exclusions 1;

        }

        if ($exclusions = "0") {

            return 451;

        }
}
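
The $allowed_country variable used in the geo-block section is not something NGINX defines by itself; it is assumed to come from the GeoIP module plus a map in the http block. A minimal sketch (the database path and the allowed-country list are assumptions, adjust them to your own policy):

# http block (sketch): derive $allowed_country from the client's GeoIP country code
geoip_country /usr/share/GeoIP/GeoIP.dat;

map $geoip_country_code $allowed_country {
    default no;
    IN      yes;
    US      yes;
}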

 


10 Server-Side Configurations to Secure your HTTP Web Application !

1. CORS | Access-Control-Allow-Origin Policy:

CORS stands for Cross-Origin Resource Sharing; it is a technique for relaxing the same-origin policy, allowing JavaScript on a web page to consume a REST API served from a different origin. It is common for a modern website to include content delivered from another domain. Let's say there exists an API for POST, PUT or DELETE requests on your site. Then it's likely that your server will send a CORS response header that looks something like this:

Access-Control-Allow-Origin: http://www.mytrustedsite.com

This means that your server will happily deliver content to “http://www.mytrustedsite.com“. Furthermore, and this is the significant part, if your API provides for it, “http://www.mytrustedsite.com” can add, alter or delete content on your database or web-server. That’s fine, because you designed your API to allow such transactions and you’ve explicitly given “http://www.mytrustedsite.com” permission to use your API.

However, if you’ve not set this explicitly, you might send the following header value:

Access-Control-Allow-Origin: *

The problem with this is that you are now allowing any website to interact with your API. If a vulnerable website starts consuming your API or content, every user of that vulnerable site becomes a potential threat to the content you are exposing through it.

For further countermeasures, please refer to this link.
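As a quick illustration (my own sketch, not taken from the referenced countermeasures), locking the header down to a single trusted origin in NGINX could look like this; the location path and origin are examples only:

location /api/ {
    # Only the explicitly trusted origin may consume this API; never use "*" on authenticated endpoints
    add_header Access-Control-Allow-Origin "http://www.mytrustedsite.com" always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE" always;
}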

2. X-Permitted-Cross-Domain-Policies:

A cross-domain policy file is an XML document that grants a web client, such as Adobe Flash Player or Adobe Acrobat (though not necessarily limited to these), permission to handle data across domains. When a client requests content hosted on a particular source domain and that content makes requests directed towards a domain other than its own, the remote domain needs to host a cross-domain policy file that grants access to the source domain, allowing the client to continue the transaction. Normally a meta-policy is declared in the master policy file, but those who cannot write to the root directory can also declare a meta-policy using the X-Permitted-Cross-Domain-Policies HTTP response header. The available values are none, master-only, by-content-type, by-ftp-filename and all.

When you don’t want to allow content producers to embed your work in their content, ensure you have no crossdomain.xml files within your website’s directory structure. You should also send the following header with each response from your web-server:

X-Permitted-Cross-Domain-Policies: none

3. MIME Sniffing:

MIME sniffing, also known as content sniffing, is a mechanism by which the content-rendering user agent detects the type of content being served and "auto-corrects" it if it is not declared correctly. This can happen when the metadata has not been configured properly on the web server. While it adds ease of use, it also opens up security vulnerabilities: an attacker can craft a payload and get scripts executed at the client's end, resulting in an XSS attack.

Hence it is a very good practice to disable client-side MIME sniffing, which can be achieved by setting the following response header:

X-Content-Type-Options: nosniff

Please refer to one of the references below:

https://blog.mozilla.org/security/2016/08/26/mitigating-mime-confusion-attacks-in-firefox

Before we go ahead with the next two points, I would like to introduce a concept called "reconnaissance", which means gathering as much data as possible about the target before launching your attacks, in order to make the attack vectors more focused and effective. Reconnaissance is Phase #1 of the five phases any ethical hacker goes through.

4. Server Identifier:

The HTTP response header "Server" reveals the identity of the web server software handling the current request. From an Ops point of view, this makes it a lot easier for an engineer to debug and troubleshoot issues. Now think of the attacker's perspective: knowing the server type, an attacker can look up known CVE IDs and already-discovered vulnerabilities that can be launched against the target, making the attacker's job much easier.

5. X-Powered-By:

The HTTP response header "X-Powered-By" specifies the technology supporting the web application (e.g. ASP.NET, PHP, JBoss); version details often leak through related headers such as X-Runtime, X-Version or X-AspNet-Version. As with the Server header above, let's stick to the principle below:

DO NOT GIVE AWAY THE INFORMATION ABOUT YOUR PRECIOUS SERVER STACK INFO. LET IT BE THE SECRET SAUCE OF YOUR WEB SERVICE APPLICATION.

6. Cross-site scripting (XSS) filter:

Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.

An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site. These scripts can even rewrite the content of the HTML page.

For further classification of the XSS Types please refer to Types of Cross-Site Scripting

The response header below enables the XSS filter in the browser, ensuring that any suspected XSS attack is mitigated at the browser level itself:

X-XSS-Protection: 1; mode=block

Valid values for the header: 0 (disables the filter), 1 (enables the filter and sanitizes the page when an attack is detected), 1; mode=block (enables the filter and blocks rendering of the page when an attack is detected) and 1; report=<reporting-URI> (Chromium only: enables the filter and reports the violation).

7. Clickjacking:

Clickjacking, also known as a "UI redress attack", is when an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top-level page. Thus, the attacker is "hijacking" clicks meant for their page and routing them to another page, most likely owned by another application, domain, or both.

Using a similar technique, keystrokes can also be hijacked. With a carefully crafted combination of stylesheets, iframes, and text boxes, a user can be led to believe they are typing in the password to their email or bank account, but are instead typing into an invisible frame controlled by the attacker.

The response header below prevents clickjacking by declaring a policy, communicated from the host to the client browser, on whether the browser may display the transmitted content in frames of other web pages.

X-Frame-Options: deny

Valid values: deny (the page cannot be displayed in a frame at all), sameorigin (the page may only be framed by pages from the same origin) and allow-from <uri> (the page may only be framed by the specified origin; deprecated in favour of the CSP frame-ancestors directive).

8. Content Security Policy:

Content Security Policy (CSP) is an effective “defense in depth” technique to be used against content injection attacks. It is a declarative policy that informs the user agent what are valid sources to load the content from.

It was introduced by Mozilla in Firefox 4 and is now supported by Firefox 23+, Chrome 25+ and Opera 19+. It has since been adopted as a standard and has grown in both adoption and capabilities.

Enabling CSP can sometimes break existing functionality, so a web security administrator needs to examine the policy carefully before rolling it out over an existing, running application. If it can be enabled right from the start for an app that is not yet published, that is indeed the best approach.

Below is an example for a CSP Header:

Content-Security-Policy: default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';

Directives (Source: OWASP LINK)

The following is a listing of directives, and a brief description.

CSP 1.0 Spec

  • connect-src (d) – restricts which URLs the protected resource can load using script interfaces. (e.g. send() method of an XMLHttpRequest object)
  • font-src (d) – restricts from where the protected resource can load fonts
  • img-src (d) – restricts from where the protected resource can load images
  • media-src (d) – restricts from where the protected resource can load video, audio, and associated text tracks
  • object-src (d) – restricts from where the protected resource can load plugins
  • script-src (d) – restricts which scripts the protected resource can execute. Additional restrictions apply to inline scripts and eval. Additional directives in CSP2 add hash and nonce support
  • style-src (d) – restricts which styles may be applied to the protected resource. Additional restrictions apply to inline styles and eval.
  • default-src – Covers any directive with (d)
  • frame-src – restricts from where the protected resource can embed frames. Note, deprecated in CSP2
  • report-uri – specifies a URL to which the user agent sends reports about policy violation
  • sandbox – specifies an HTML sandbox policy that the user agent applies to the protected resource. Optional in 1.0

New in CSP2

  • form-action – restricts which URLs can be used as the action of HTML form elements
  • frame-ancestors – indicates whether the user agent should allow embedding the resource using a frame, iframe, object, embed or applet element, or equivalent functionality in non-HTML resources
  • plugin-types – restricts the set of plugins that can be invoked by the protected resource by limiting the types of resources that can be embedded
  • base-uri – restricts the URLs that can be used to specify the document base URL
  • child-src (d) – governs the creation of nested browsing contexts as well as Worker execution contexts

9. Deploy TLS:

SSL is the predecessor of TLS, although the name SSL is still commonly used for both. SSL/TLS are protocols used for encrypting information between two points. It is usually between server and client, but there are times when server-to-server and client-to-client encryption are needed. There are multiple advantages of enabling TLS encryption for your web services. Some of them are:

  • Introduces strong encryption of all data between the client and server
  • Protects against packet sniffing
  • Protects against man-in-the-middle attacks
  • When your users see that padlock in their web-browsers, they know that their connection is with your website, a trusted source.

TLS is a very deep topic that I have already covered in a separate blog post; you can refer to it via the LINK.

In order to check the TLS/SSL settings tuned on your web service, use the tool testssl.sh, which is available for download and runs on any *nix-based system. There is also a free online tool that analyzes your site and gives you a rating plus suggestions to improve its SSL/TLS security level: the Qualys SSL Labs SSL Server Test. Always aim for an A+ grade!
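A typical run looks like this (a sketch; the hostname is only an example, and the clone URL is the tool's public GitHub repository):

    git clone https://github.com/drwetter/testssl.sh
    cd testssl.sh
    ./testssl.sh https://amisafe.secops.in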

10. Strict Transport Security:

HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should only interact with it using secure HTTPS connections, and never via the insecure HTTP protocol. HSTS is an IETF standards track protocol and is specified in RFC 6797. A server implements an HSTS policy by supplying a header (Strict-Transport-Security) over an HTTPS connection (HSTS headers over HTTP are ignored).

The following Response Header ensures the same:

Strict-Transport-Security: max-age=31536000; includeSubDomains

Values: max-age=<seconds> (how long the browser must remember to access the site only over HTTPS), includeSubDomains (apply the policy to all subdomains as well) and preload (request inclusion in the browsers' built-in HSTS preload lists).
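
To tie several of the points above together, here is a hedged NGINX snippet (my own sketch, not taken from any single reference) that hides the server version details from point 4 and sets the response headers discussed in points 2, 3, 6, 7 and 10; review each value against your own policy before deploying it:

# Hide server version details in responses and error pages
server_tokens off;

# Security response headers discussed above
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Frame-Options "deny" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;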

With all the measures above in place, your web service is still not invincible. Always have a healthy security testing lifecycle in your SDLC; have a dedicated Web Application Firewall (I shall share my views on this in a separate article) that is well maintained with the latest rulesets and has adequate logging and alerting in place; and, most importantly, have a highly skilled security engineering team that combines both offensive and defensive security engineers.

Keep Defending!


What is SSL – A Deep Dive

(Diagram: SSL handshake with two-way authentication using certificates. Source: Wikimedia)

SSL/TLS are protocols used for encrypting information between two points. It is usually between server and client, but there are times when server to server and client to client encryption are needed. For the purpose of this blog, I will focus only on the negotiation between server and client.

For SSL/TLS negotiation to take place, the system administrator must prepare a minimum of two files: the Private Key and the Certificate. When requesting a certificate from a Certificate Authority such as Symantec Trust Services, an additional file must be created. This file is called a Certificate Signing Request and is generated from the Private Key. The process for generating these files depends on the software that will use them for encryption.

Additional certificates called Intermediate Certificate Authority Certificates and Certificate Authority Root Certificates may need to be installed on the server. This is again server software dependent. There is usually no need to install the Intermediate and Root CA files on the client applications or browsers.

The structure of an X.509 certificate:

  • Certificate
    • Version Number
    • Serial Number
    • Signature Algorithm ID
    • Issuer Name
    • Validity period
      • Not Before
      • Not After
    • Subject Name
    • Subject Public Key Info
      • Public Key Algorithm
      • Subject Public Key
    • Issuer Unique Identifier (optional)
    • Subject Unique Identifier (optional)
    • Extensions (optional)
  • Certificate Signature Algorithm
  • Certificate Signature
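
To see these fields for a real certificate, the certificate can be dumped with OpenSSL; a small sketch, assuming a PEM-encoded certificate file named cert.pem:

    # Print all X.509 fields of the certificate without outputting the encoded certificate itself
    openssl x509 -in cert.pem -noout -text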


The following is a standard SSL handshake when the RSA key exchange algorithm is used (please refer to the diagram above; source: Wikimedia):

Client Hello


– Information that the server needs to communicate with the client using SSL.
– Including SSL version number, cipher settings, session-specific data.

Server Hello


– Information that the client needs to communicate with the server using SSL.
– Including SSL version number, cipher settings, session-specific data.
– Including Server’s Certificate (Public Key)

Authentication and Pre-Master Secret


– Client authenticates the server certificate. (e.g. Common Name / Date / Issuer)
– Client (depending on the cipher) creates the pre-master secret for the session,
– Encrypts with the server’s public key and sends the encrypted pre-master secret to the server.

Decryption and Master Secret
– Server uses its private key to decrypt the pre-master secret,
– Both Server and Client perform steps to generate the master secret with the agreed cipher.

Generate Session Keys
– Both the client and the server use the master secret to generate the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session

Encryption with Session Key


– Both client and server exchange messages to inform that future messages will be encrypted.

Source: Wikipedia

Tools such as OpenSSL can be used to check the SSL/TLS negotiation. Try running the commands below on a Linux/Mac/Windows machine that has a recent OpenSSL version installed and observe the results:

openssl s_client -connect amisafe.secops.in:443 -ssl3

openssl s_client -connect amisafe.secops.in:443 -tls1

openssl s_client -connect amisafe.secops.in:443 -tls1_1

openssl s_client -connect amisafe.secops.in:443 -tls1_2

 


Why am I here?

We shall be discussing the best security practices in our day-to-day digital world, covering smartphones, debit/credit cards, Internet banking, email phishing and much more.

Keep watching this space !

 

 

 
