The RIGHT way of Password Hashing !

If you’re a web developer, you’ve probably had to make a user account system. The most important aspect of a user account system is how user passwords are protected. User account databases are hacked frequently, so you absolutely must do something to protect your users’ passwords if your website is ever breached. The best way to protect passwords is to employ salted password hashing. This page will explain why it’s done the way it is.

There are a lot of conflicting ideas and misconceptions on how to do password hashing properly, probably due to the abundance of misinformation on the web. Password hashing is one of those things that’s so simple, but yet so many people get wrong. With this page, I hope to explain not only the correct way to do it, but why it should be done that way.

IMPORTANT WARNING: If you are thinking of writing your own password hashing code, please don’t! It’s too easy to screw up. No, that cryptography course you took in university doesn’t make you exempt from this warning. This applies to everyone: DO NOT WRITE YOUR OWN CRYPTO! The problem of storing passwords has already been solved. Use either phpass, the PHP, C#, Java, and Ruby implementations in defuse/password-hashing, or libsodium.

If for some reason you missed that big red warning note, please go read it now. Really, this guide is not meant to walk you through the process of writing your own storage system; it’s here to explain the reasons why passwords should be stored a certain way.

What is password hashing?

hash("hello") = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
hash("hbllo") = 58756879c05c68dfac9866712fad6a93f8146f337a69afe7dd238f3364946366
hash("waltz") = c0e81794384491161f1777c232bc6bd9ec38f616560b120fda8e90f383853542

Hash algorithms are one way functions. They turn any amount of data into a fixed-length “fingerprint” that cannot be reversed. They also have the property that if the input changes by even a tiny bit, the resulting hash is completely different (see the example above). This is great for protecting passwords, because we want to store passwords in a form that protects them even if the password file itself is compromised, but at the same time, we need to be able to verify that a user’s password is correct.

The general workflow for account registration and authentication in a hash-based account system is as follows:

  1. The user creates an account.
  2. Their password is hashed and stored in the database. At no point is the plain-text (unencrypted) password ever written to the hard drive.
  3. When the user attempts to login, the hash of the password they entered is checked against the hash of their real password (retrieved from the database).
  4. If the hashes match, the user is granted access. If not, the user is told they entered invalid login credentials.
  5. Steps 3 and 4 repeat every time someone tries to login to their account.

In step 4, never tell the user if it was the username or password they got wrong. Always display a generic message like “Invalid username or password.” This prevents attackers from enumerating valid usernames without knowing their passwords.

It should be noted that the hash functions used to protect passwords are not the same as the hash functions you may have seen in a data structures course. The hash functions used to implement data structures such as hash tables are designed to be fast, not secure. Only cryptographic hash functions may be used to implement password hashing. Hash functions like SHA256, SHA512, RipeMD, and WHIRLPOOL are cryptographic hash functions.

It is easy to think that all you have to do is run the password through a cryptographic hash function and your users’ passwords will be secure. This is far from the truth. There are many ways to recover passwords from plain hashes very quickly. There are several easy-to-implement techniques that make these “attacks” much less effective. To motivate the need for these techniques, consider CrackStation’s free hash cracker: on its front page, you can submit a list of hashes to be cracked and receive results in less than a second. Clearly, simply hashing the password does not meet our needs for security.

The next section will discuss some of the common attacks used to crack plain password hashes.

How Hashes are Cracked

  • Dictionary and Brute Force Attacks

    Dictionary Attack

    Trying apple        : failed
    Trying blueberry    : failed
    Trying justinbeiber : failed
    ...
    Trying letmein      : failed
    Trying s3cr3t       : success!

    Brute Force Attack

    Trying aaaa : failed
    Trying aaab : failed
    Trying aaac : failed
    ...
    Trying acdb : failed
    Trying acdc : success!
    The simplest way to crack a hash is to try to guess the password, hashing each guess, and checking if the guess’s hash equals the hash being cracked. If the hashes are equal, the guess is the password. The two most common ways of guessing passwords are dictionary attacks and brute-force attacks.

    A dictionary attack uses a file containing words, phrases, common passwords, and other strings that are likely to be used as a password. Each word in the file is hashed, and its hash is compared to the password hash. If they match, that word is the password. These dictionary files are constructed by extracting words from large bodies of text, and even from real databases of passwords. Further processing is often applied to dictionary files, such as replacing words with their “leet speak” equivalents (“hello” becomes “h3110”), to make them more effective.

    A brute-force attack tries every possible combination of characters up to a given length. These attacks are very computationally expensive, and are usually the least efficient in terms of hashes cracked per processor time, but they will always eventually find the password. Passwords should be long enough that searching through all possible character strings to find it will take too long to be worthwhile.

    There is no way to prevent dictionary attacks or brute force attacks. They can be made less effective, but there isn’t a way to prevent them altogether. If your password hashing system is secure, the only way to crack the hashes will be to run a dictionary or brute-force attack on each hash.
  • Lookup Tables

    Searching: 5f4dcc3b5aa765d61d8327deb882cf99: FOUND: password5
    Searching: 6cbe615c106f422d23669b610b564800:  not in database
    Searching: 630bf032efe4507f2c57b280995925a9: FOUND: letMEin12 
    Searching: 386f43fab5d096a7a66d67c8f213e5ec: FOUND: mcd0nalds
    Searching: d5ec75d5fe70d428685510fae36492d9: FOUND: p@ssw0rd!

    Lookup tables are an extremely effective method for cracking many hashes of the same type very quickly. The general idea is to pre-compute the hashes of the passwords in a password dictionary and store them, and their corresponding password, in a lookup table data structure. A good implementation of a lookup table can process hundreds of hash lookups per second, even when they contain many billions of hashes.
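
    To make the idea concrete, here is a minimal Java sketch of a lookup table. The tiny dictionary, the class name, and the “leaked” hash are purely illustrative; a real table holds billions of precomputed entries on disk rather than a HashMap in memory.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HashMap;
    import java.util.Map;

    public class LookupTableDemo {
        // Hex-encode the SHA-256 hash of a string.
        static String sha256Hex(String s) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            // Pre-compute hash -> password for a (tiny, illustrative) dictionary.
            String[] dictionary = { "password", "letmein", "s3cr3t", "qwerty" };
            Map<String, String> table = new HashMap<>();
            for (String word : dictionary) table.put(sha256Hex(word), word);

            // "Cracking" a leaked unsalted hash is now a single map lookup.
            String leakedHash = sha256Hex("letmein"); // stand-in for a hash stolen from a database
            String cracked = table.get(leakedHash);
            System.out.println(cracked != null ? "FOUND: " + cracked : "not in database");
        }
    }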

    If you want a better idea of how fast lookup tables can be, try cracking the following sha256 hashes with CrackStation’s free hash cracker.

    c11083b4b0a7743af748c85d343dfee9fbb8b2576c05f3a7f0d632b0926aadfc
    08eac03b80adc33dc7d8fbe44b7c7b05d3a2c511166bdb43fcb710b03ba919e7
    e4ba5cbd251c98e6cd1c23f126a3b81d8d8328abc95387229850952b3ef9f904
    5206b8b8a996cf5320cb12ca91c7b790fba9f030408efe83ebb83548dc3007bd
  • Reverse Lookup Tables

    Searching for hash(apple) in users' hash list...     : Matches [alice3, 0bob0, charles8]
    Searching for hash(blueberry) in users' hash list... : Matches [usr10101, timmy, john91]
    Searching for hash(letmein) in users' hash list...   : Matches [wilson10, dragonslayerX, joe1984]
    Searching for hash(s3cr3t) in users' hash list...    : Matches [bruce19, knuth1337, john87]
    Searching for hash(z@29hjja) in users' hash list...  : No users used this password

    This attack allows an attacker to apply a dictionary or brute-force attack to many hashes at the same time, without having to pre-compute a lookup table.

    First, the attacker creates a lookup table that maps each password hash from the compromised user account database to a list of users who had that hash. The attacker then hashes each password guess and uses the lookup table to get a list of users whose password was the attacker’s guess. This attack is especially effective because it is common for many users to have the same password.

  • Rainbow Tables

    Rainbow tables are a time-memory trade-off technique. They are like lookup tables, except that they sacrifice hash cracking speed to make the lookup tables smaller. Because they are smaller, the solutions to more hashes can be stored in the same amount of space, making them more effective. Rainbow tables that can crack any md5 hash of a password up to 8 characters long exist.

Next, we’ll look at a technique called salting, which makes it impossible to use lookup tables and rainbow tables to crack a hash.

Adding Salt

hash("hello")                    = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
hash("hello" + "QxLUF1bgIAdeQX") = 9e209040c863f84a31e719795b2577523954739fe5ed3b58a75cff2127075ed1
hash("hello" + "bv5PehSMfV11Cd") = d1d3ec2e6f20fd420d50e2642992841d8338a314b8ea157c9e18477aaef226ab
hash("hello" + "YYLmfY6IehjZMQ") = a49670c3c18b9e079b9cfaf51634f563dc8ae3070db2c4a8544305df1b60f007

Lookup tables and rainbow tables only work because each password is hashed the exact same way. If two users have the same password, they’ll have the same password hashes. We can prevent these attacks by randomizing each hash, so that when the same password is hashed twice, the hashes are not the same.

We can randomize the hashes by appending or prepending a random string, called a salt, to the password before hashing. As shown in the example above, this makes the same password hash into a completely different string every time. To check if a password is correct, we need the salt, so it is usually stored in the user account database along with the hash, or as part of the hash string itself.

The salt does not need to be secret. Just by randomizing the hashes, lookup tables, reverse lookup tables, and rainbow tables become ineffective. An attacker won’t know in advance what the salt will be, so they can’t pre-compute a lookup table or rainbow table. If each user’s password is hashed with a different salt, the reverse lookup table attack won’t work either.

In the next section, we’ll look at how salt is commonly implemented incorrectly.

The WRONG Way: Short Salt & Salt Reuse

The most common salt implementation errors are reusing the same salt in multiple hashes, or using a salt that is too short.

Salt Reuse

A common mistake is to use the same salt in each hash. Either the salt is hard-coded into the program, or is generated randomly once. This is ineffective because if two users have the same password, they’ll still have the same hash. An attacker can still use a reverse lookup table attack to run a dictionary attack on every hash at the same time. They just have to apply the salt to each password guess before they hash it. If the salt is hard-coded into a popular product, lookup tables and rainbow tables can be built for that salt, to make it easier to crack hashes generated by the product.

A new random salt must be generated each time a user creates an account or changes their password.

Short Salt

If the salt is too short, an attacker can build a lookup table for every possible salt. For example, if the salt is only three ASCII characters, there are only 95x95x95 = 857,375 possible salts. That may seem like a lot, but if each lookup table contains only 1MB of the most common passwords, collectively they will be only 837GB, which is not a lot considering 1000GB hard drives can be bought for under $100 today.

For the same reason, the username shouldn’t be used as a salt. Usernames may be unique to a single service, but they are predictable and often reused for accounts on other services. An attacker can build lookup tables for common usernames and use them to crack username-salted hashes.

To make it impossible for an attacker to create a lookup table for every possible salt, the salt must be long. A good rule of thumb is to use a salt that is the same size as the output of the hash function. For example, the output of SHA256 is 256 bits (32 bytes), so the salt should be at least 32 random bytes.

The WRONG Way: Double Hashing & Wacky Hash Functions

This section covers another common password hashing misconception: wacky combinations of hash algorithms. It’s easy to get carried away and try to combine different hash functions, hoping that the result will be more secure. In practice, though, there is very little benefit to doing so. All it does is create interoperability problems, and can sometimes even make the hashes less secure. Never try to invent your own crypto; always use a standard that has been designed by experts. Some will argue that using multiple hash functions makes the process of computing the hash slower, so cracking is slower, but there’s a better way to make the cracking process slower, as we’ll see later.

Here are some examples of poor wacky hash functions I’ve seen suggested in forums on the internet.

md5(sha1(password))
-------
md5(md5(salt) + md5(password))
-------
sha1(sha1(password))
-------
sha1(str_rot13(password + salt))
-------
md5(sha1(md5(md5(password) + sha1(password)) + md5(password)))

Do not use any of these.

Note: This section has proven to be controversial. I’ve received a number of emails arguing that wacky hash functions are a good thing, because it’s better if the attacker doesn’t know which hash function is in use, it’s less likely for an attacker to have pre-computed a rainbow table for the wacky hash function, and it takes longer to compute the hash function.

An attacker cannot attack a hash when he doesn’t know the algorithm, but note Kerckhoffs’s principle, that the attacker will usually have access to the source code (especially if it’s free or open source software), and that given a few password-hash pairs from the target system, it is not difficult to reverse engineer the algorithm. It does take longer to compute wacky hash functions, but only by a small constant factor. It’s better to use an iterated algorithm that’s designed to be extremely hard to parallelize (these are discussed below). And, properly salting the hash solves the rainbow table problem.

If you really want to use a standardized “wacky” hash function like HMAC, then it’s OK. But if your reason for doing so is to make the hash computation slower, read the section below about key stretching first.

Compare these minor benefits to the risks of accidentally implementing a completely insecure hash function and the interoperability problems wacky hashes create. It’s clearly best to use a standard and well-tested algorithm.

Hash Collisions

Because hash functions map arbitrary amounts of data to fixed-length strings, there must be some inputs that hash into the same string. Cryptographic hash functions are designed to make these collisions incredibly difficult to find. From time to time, cryptographers find “attacks” on hash functions that make finding collisions easier. A recent example is the MD5 hash function, for which collisions have actually been found.

Collision attacks are a sign that it may be more likely for a string other than the user’s password to have the same hash. However, finding collisions in even a weak hash function like MD5 requires a lot of dedicated computing power, so it is very unlikely that these collisions will happen “by accident” in practice. A password hashed using MD5 and salt is, for all practical purposes, just as secure as if it were hashed with SHA256 and salt. Nevertheless, it is a good idea to use a more secure hash function like SHA256, SHA512, RipeMD, or WHIRLPOOL if possible.

The RIGHT Way: How to Hash Properly

This section describes exactly how passwords should be hashed. The first subsection covers the basics—everything that is absolutely necessary. The following subsections explain how the basics can be augmented to make the hashes even harder to crack.

The Basics: Hashing with Salt

Warning: Do not just read this section. You absolutely must implement the stuff in the next section: “Making Password Cracking Harder: Slow Hash Functions”.

We’ve seen how malicious hackers can crack plain hashes very quickly using lookup tables and rainbow tables. We’ve learned that randomizing the hashing using salt is the solution to the problem. But how do we generate the salt, and how do we apply it to the password?

Salt should be generated using a Cryptographically Secure Pseudo-Random Number Generator (CSPRNG). CSPRNGs are very different than ordinary pseudo-random number generators, like the “C” language’s rand() function. As the name suggests, CSPRNGs are designed to be cryptographically secure, meaning they provide a high level of randomness and are completely unpredictable. We don’t want our salts to be predictable, so we must use a CSPRNG. The following table lists some CSPRNGs that exist for some popular programming platforms.

Platform                             CSPRNG
PHP                                  mcrypt_create_iv, openssl_random_pseudo_bytes
Java                                 java.security.SecureRandom
Dot NET (C#, VB)                     System.Security.Cryptography.RNGCryptoServiceProvider
Ruby                                 SecureRandom
Python                               os.urandom
Perl                                 Math::Random::Secure
C/C++ (Windows API)                  CryptGenRandom
Any language on GNU/Linux or Unix    Read from /dev/random or /dev/urandom
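
For example, a short Java sketch using the SecureRandom CSPRNG from the table above; the 32-byte length and the Base64 encoding are illustrative choices:

import java.security.SecureRandom;
import java.util.Base64;

public class SaltExample {
    public static void main(String[] args) {
        SecureRandom csprng = new SecureRandom();   // CSPRNG; never use java.util.Random for salts
        byte[] salt = new byte[32];                 // 32 bytes = 256 bits of salt
        csprng.nextBytes(salt);
        // Store the salt alongside the hash, e.g. Base64-encoded in the user account table.
        System.out.println(Base64.getEncoder().encodeToString(salt));
    }
}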

The salt needs to be unique per-user per-password. Every time a user creates an account or changes their password, the password should be hashed using a new random salt. Never reuse a salt. The salt also needs to be long, so that there are many possible salts. As a rule of thumb, make your salt at least as long as the hash function’s output. The salt should be stored in the user account table alongside the hash.

To Store a Password

  1. Generate a long random salt using a CSPRNG.
  2. Prepend the salt to the password and hash it with a standard password hashing function like Argon2, bcrypt, scrypt, or PBKDF2.
  3. Save both the salt and the hash in the user’s database record.

To Validate a Password

  1. Retrieve the user’s salt and hash from the database.
  2. Prepend the salt to the given password and hash it using the same hash function.
  3. Compare the hash of the given password with the hash from the database. If they match, the password is correct. Otherwise, the password is incorrect.
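
Here is a minimal Java sketch of both procedures using the JDK’s built-in PBKDF2 implementation. The iteration count, the output length, and the “iterations:salt:hash” storage format are illustrative assumptions, not requirements; note that PBKDF2 takes the salt as a separate parameter, so there is no need to prepend it by hand.

import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStorage {
    private static final int SALT_BYTES = 32;
    private static final int HASH_BYTES = 32;
    private static final int ITERATIONS = 100_000; // illustrative; benchmark for your own hardware

    // "To Store a Password": returns "iterations:salt:hash" for the database record.
    static String createHash(char[] password) throws Exception {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);                       // CSPRNG salt, new for every hash
        byte[] hash = pbkdf2(password, salt, ITERATIONS);
        Base64.Encoder b64 = Base64.getEncoder();
        return ITERATIONS + ":" + b64.encodeToString(salt) + ":" + b64.encodeToString(hash);
    }

    // "To Validate a Password": recompute with the stored salt and compare.
    static boolean verify(char[] password, String stored) throws Exception {
        String[] parts = stored.split(":");
        int iterations = Integer.parseInt(parts[0]);
        byte[] salt = Base64.getDecoder().decode(parts[1]);
        byte[] expected = Base64.getDecoder().decode(parts[2]);
        byte[] actual = pbkdf2(password, salt, iterations);
        return slowEquals(expected, actual);                      // constant-time compare (see FAQ below)
    }

    static byte[] pbkdf2(char[] password, byte[] salt, int iterations) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, HASH_BYTES * 8);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256").generateSecret(spec).getEncoded();
    }

    // Length-constant time comparison, as discussed later on this page.
    static boolean slowEquals(byte[] a, byte[] b) {
        int diff = a.length ^ b.length;
        for (int i = 0; i < a.length && i < b.length; i++) diff |= a[i] ^ b[i];
        return diff == 0;
    }
}

createHash() is called when the account is created or the password changes; verify() is called at every login attempt.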

In a Web Application, always hash on the server

If you are writing a web application, you might wonder where to hash. Should the password be hashed in the user’s browser with JavaScript, or should it be sent to the server “in the clear” and hashed there?

Even if you are hashing the user’s passwords in JavaScript, you still have to hash the hashes on the server. Consider a website that hashes users’ passwords in the user’s browser without hashing the hashes on the server. To authenticate a user, this website will accept a hash from the browser and check if that hash exactly matches the one in the database. This seems more secure than just hashing on the server, since the users’ passwords are never sent to the server, but it’s not.

The problem is that the client-side hash logically becomes the user’s password. All the user needs to do to authenticate is tell the server the hash of their password. If a bad guy got a user’s hash they could use it to authenticate to the server, without knowing the user’s password! So, if the bad guy somehow steals the database of hashes from this hypothetical website, they’ll have immediate access to everyone’s accounts without having to guess any passwords.

This isn’t to say that you shouldn’t hash in the browser, but if you do, you absolutely have to hash on the server too. Hashing in the browser is certainly a good idea, but consider the following points for your implementation:

  • Client-side password hashing is not a substitute for HTTPS (SSL/TLS). If the connection between the browser and the server is insecure, a man-in-the-middle can modify the JavaScript code as it is downloaded to remove the hashing functionality and get the user’s password.
  • Some web browsers don’t support JavaScript, and some users disable JavaScript in their browser. So for maximum compatibility, your app should detect whether or not the browser supports JavaScript and emulate the client-side hash on the server if it doesn’t.
  • You need to salt the client-side hashes too. The obvious solution is to make the client-side script ask the server for the user’s salt. Don’t do that, because it lets the bad guys check if a username is valid without knowing the password. Since you’re hashing and salting (with a good salt) on the server too, it’s OK to use the username (or email) concatenated with a site-specific string (e.g. domain name) as the client-side salt.

Making Password Cracking Harder: Slow Hash Functions

Salt ensures that attackers can’t use specialized attacks like lookup tables and rainbow tables to crack large collections of hashes quickly, but it doesn’t prevent them from running dictionary or brute-force attacks on each hash individually. High-end graphics cards (GPUs) and custom hardware can compute billions of hashes per second, so these attacks are still very effective. To make these attacks less effective, we can use a technique known as key stretching.

The idea is to make the hash function very slow, so that even with a fast GPU or custom hardware, dictionary and brute-force attacks are too slow to be worthwhile. The goal is to make the hash function slow enough to impede attacks, but still fast enough to not cause a noticeable delay for the user.

Key stretching is implemented using a special type of CPU-intensive hash function. Don’t try to invent your own; simply hashing the hash of the password repeatedly isn’t enough, as it can be parallelized in hardware and executed as fast as a normal hash. Use a standard algorithm like PBKDF2 or bcrypt. You can find a PHP implementation of PBKDF2 here.

These algorithms take a security factor or iteration count as an argument. This value determines how slow the hash function will be. For desktop software or smartphone apps, the best way to choose this parameter is to run a short benchmark on the device to find the value that makes the hash take about half a second. This way, your program can be as secure as possible without affecting the user experience.
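
As a rough sketch of such a benchmark in Java, reusing the pbkdf2() helper from the storage example above (the starting count and the half-second target are illustrative):

// Assumes the pbkdf2() helper from the password storage sketch above.
static int calibrateIterations(char[] samplePassword, byte[] salt) throws Exception {
    int iterations = 10_000;                          // illustrative starting point
    while (true) {
        long start = System.nanoTime();
        pbkdf2(samplePassword, salt, iterations);
        long millis = (System.nanoTime() - start) / 1_000_000;
        if (millis >= 500) return iterations;         // roughly half a second per hash
        iterations *= 2;                              // double until the hash is slow enough
    }
}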

If you use a key stretching hash in a web application, be aware that you will need extra computational resources to process large volumes of authentication requests, and that key stretching may make it easier to run a Denial of Service (DoS) attack on your website. I still recommend using key stretching, but with a lower iteration count. You should calculate the iteration count based on your computational resources and the expected maximum authentication request rate. The denial of service threat can be eliminated by making the user solve a CAPTCHA every time they log in. Always design your system so that the iteration count can be increased or decreased in the future.

If you are worried about the computational burden, but still want to use key stretching in a web application, consider running the key stretching algorithm in the user’s browser with JavaScript. The Stanford JavaScript Crypto Library includes PBKDF2. The iteration count should be set low enough that the system is usable with slower clients like mobile devices, and the system should fall back to server-side computation if the user’s browser doesn’t support JavaScript. Client-side key stretching does not remove the need for server-side hashing. You must hash the hash generated by the client the same way you would hash a normal password.

Impossible-to-crack Hashes: Keyed Hashes and Password Hashing Hardware

As long as an attacker can use a hash to check whether a password guess is right or wrong, they can run a dictionary or brute-force attack on the hash. The next step is to add a secret key to the hash so that only someone who knows the key can use the hash to validate a password. This can be accomplished two ways. Either the hash can be encrypted using a cipher like AES, or the secret key can be included in the hash using a keyed hash algorithm like HMAC.

This is not as easy as it sounds. The key has to be kept secret from an attacker even in the event of a breach. If an attacker gains full access to the system, they’ll be able to steal the key no matter where it is stored. The key must be stored in an external system, such as a physically separate server dedicated to password validation, or a special hardware device attached to the server such as the YubiHSM.

I highly recommend this approach for any large scale (more than 100,000 users) service. I consider it necessary for any service hosting more than 1,000,000 user accounts.

If you can’t afford multiple dedicated servers or special hardware devices, you can still get some of the benefits of keyed hashes on a standard web server. Most databases are breached using SQL Injection Attacks, which, in most cases, don’t give attackers access to the local filesystem (disable local filesystem access in your SQL server if it has this feature). If you generate a random key and store it in a file that isn’t accessible from the web, and include it into the salted hashes, then the hashes won’t be vulnerable if your database is breached using a simple SQL injection attack. Don’t hard-code a key into the source code, generate it randomly when the application is installed. This isn’t as secure as using a separate system to do the password hashing, because if there are SQL injection vulnerabilities in a web application, there are probably other types, such as Local File Inclusion, that an attacker could use to read the secret key file. But, it’s better than nothing.

Please note that keyed hashes do not remove the need for salt. Clever attackers will eventually find ways to compromise the keys, so it is important that hashes are still protected by salt and key stretching.
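
One common way to combine a secret key with the scheme above, sketched in Java: run the password through HMAC with the secret key first, then salt and stretch the result exactly like a normal password. The key handling here is deliberately simplified; in a real deployment the key lives on a separate system, an HSM, or at minimum a file outside the web root.

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class KeyedHash {
    // HMAC the password with a secret key that is stored outside the database.
    static byte[] hmacPassword(byte[] secretKey, char[] password) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        return mac.doFinal(new String(password).getBytes(StandardCharsets.UTF_8));
    }
    // The HMAC output is then salted and key-stretched exactly like a normal password
    // (for example, fed through the PBKDF2 helper shown earlier), so the salt still matters.
}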

Other Security Measures

Password hashing protects passwords in the event of a security breach. It does not make the application as a whole more secure. Much more must be done to prevent the password hashes (and other user data) from being stolen in the first place.

Even experienced developers must be educated in security in order to write secure applications. A great resource for learning about web application vulnerabilities is The Open Web Application Security Project (OWASP). A good introduction is the OWASP Top Ten Vulnerability List. Unless you understand all the vulnerabilities on the list, do not attempt to write a web application that deals with sensitive data. It is the employer’s responsibility to ensure all developers are adequately trained in secure application development.

Having a third party “penetration test” your application is a good idea. Even the best programmers make mistakes, so it always makes sense to have a security expert review the code for potential vulnerabilities. Find a trustworthy organization (or hire staff) to review your code on a regular basis. The security review process should begin early in an application’s life and continue throughout its development.

It is also important to monitor your website to detect a breach if one does occur. I recommend hiring at least one person whose full time job is detecting and responding to security breaches. If a breach goes undetected, the attacker can make your website infect visitors with malware, so it is extremely important that breaches are detected and responded to promptly.

Frequently Asked Questions

What hash algorithm should I use?

DO use:

  • A standard, well-tested password hashing function such as Argon2, bcrypt, scrypt, or PBKDF2.
  • The libraries recommended at the top of this page: phpass, defuse/password-hashing, or libsodium.

DO NOT use:

  • Fast cryptographic hash functions such as MD5, SHA1, SHA256, SHA512, RipeMD, WHIRLPOOL, SHA3, etc.
  • Insecure versions of crypt ($1$, $2$, $2x$, $3$).
  • Any algorithm that you designed yourself. Only use technology that is in the public domain and has been well-tested by experienced cryptographers.

Even though there are no cryptographic attacks on MD5 or SHA1 that make their hashes easier to crack, they are old and are widely considered (somewhat incorrectly) to be inadequate for password storage. So I don’t recommend using them. An exception to this rule is PBKDF2, which is frequently implemented using SHA1 as the underlying hash function.

How should I allow users to reset their password when they forget it?

It is my personal opinion that all password reset mechanisms in widespread use today are insecure. If you have high security requirements, such as an encryption service would, do not let the user reset their password.

Most websites use an email loop to authenticate users who have forgotten their password. To do this, generate a random single-use token that is strongly tied to the account. Include it in a password reset link sent to the user’s email address. When the user clicks a password reset link containing a valid token, prompt them for a new password. Be sure that the token is strongly tied to the user account so that an attacker can’t use a token sent to his own email address to reset a different user’s password.

The token must be set to expire in 15 minutes or after it is used, whichever comes first. It is also a good idea to expire any existing password tokens when the user logs in (they remembered their password) or requests another reset token. If a token doesn’t expire, it can be forever used to break into the user’s account. Email (SMTP) is a plain-text protocol, and there may be malicious routers on the internet recording email traffic. And, a user’s email account (including the reset link) may be compromised long after their password has been changed. Making the token expire as soon as possible reduces the user’s exposure to these attacks.

Attackers will be able to modify the tokens, so don’t store the user account information or timeout information in them. They should be an unpredictable random binary blob used only to identify a record in a database table.
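
A small Java sketch of generating such a token; the 32-byte length is an illustrative choice, the 15-minute expiry comes from the advice above, and the database step is left as a comment:

import java.security.SecureRandom;
import java.time.Duration;
import java.time.Instant;
import java.util.Base64;

public class ResetToken {
    // An unpredictable, single-use token: just random bytes, with no embedded account data.
    static String newToken() {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        String token = newToken();
        Instant expires = Instant.now().plus(Duration.ofMinutes(15)); // expire in 15 minutes
        // Store (token, userId, expires) in a reset-token table keyed by the token,
        // then email the user a link containing the token.
        System.out.println(token + " valid until " + expires);
    }
}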

Never send the user a new password over email. Remember to pick a new random salt when the user resets their password. Don’t re-use the one that was used to hash their old password.

What should I do if my user account database gets leaked/hacked?

Your first priority is to determine how the system was compromised and patch the vulnerability the attacker used to get in. If you do not have experience responding to breaches, I highly recommend hiring a third-party security firm.

It may be tempting to cover up the breach and hope nobody notices. However, trying to cover up a breach makes you look worse, because you’re putting your users at further risk by not informing them that their passwords and other personal information may be compromised. You must inform your users as soon as possible—even if you don’t yet fully understand what happened. Put a notice on the front page of your website that links to a page with more detailed information, and send a notice to each user by email if possible.

Explain to your users exactly how their passwords were protected—hopefully hashed with salt—and that even though they were protected with a salted hash, a malicious hacker can still run dictionary and brute force attacks on the hashes. Malicious hackers will use any passwords they find to try to login to a user’s account on a different website, hoping they used the same password on both websites. Inform your users of this risk and recommend that they change their password on any website or service where they used a similar password. Force them to change their password for your service the next time they log in. Most users will try to “change” their password to the original password to get around the forced change quickly. Use the current password hash to ensure that they cannot do this.

It is likely, even with salted slow hashes, that an attacker will be able to crack some of the weak passwords very quickly. To reduce the attacker’s window of opportunity to use these passwords, you should require, in addition to the current password, an email loop for authentication until the user has changed their password. See the previous question, “How should I allow users to reset their password when they forget it?” for tips on implementing email loop authentication.

Also tell your users what kind of personal information was stored on the website. If your database includes credit card numbers, you should instruct your users to look over their recent and future bills closely and cancel their credit card.

What should my password policy be? Should I enforce strong passwords?

If your service doesn’t have strict security requirements, then don’t limit your users. I recommend showing users information about the strength of their password as they type it, letting them decide how secure they want their password to be. If you have special security needs, enforce a minimum length of 12 characters and require at least two letters, two digits, and two symbols.

Do not force your users to change their password more often than once every six months, as doing so creates “user fatigue” and makes users less likely to choose good passwords. Instead, train users to change their password whenever they feel it has been compromised, and to never tell their password to anyone. If it is a business setting, encourage employees to use paid time to memorize and practice their password.

If an attacker has access to my database, can’t they just replace the hash of my password with their own hash and login?

Yes, but if someone has access to your database, they probably already have access to everything on your server, so they wouldn’t need to log in to your account to get what they want. The purpose of password hashing (in the context of a website) is not to protect the website from being breached, but to protect the passwords if a breach does occur.

You can prevent hashes from being replaced during a SQL injection attack by connecting to the database with two users with different permissions. One for the ‘create account’ code and one for the ‘login’ code. The ‘create account’ code should be able to read and write to the user table, but the ‘login’ code should only be able to read.

Why do I have to use a special algorithm like HMAC? Why can’t I just append the password to the secret key?

Hash functions like MD5, SHA1, and SHA2 use the Merkle–Damgård construction, which makes them vulnerable to what are known as length extension attacks. This means that given a hash H(X), an attacker can find the value of H(pad(X) + Y), for any other string Y, without knowing X. pad(X) is the padding function used by the hash.

This means that given a hash H(key + message), an attacker can compute H(pad(key + message) + extension), without knowing the key. If the hash was being used as a message authentication code, using the key to prevent an attacker from being able to modify the message and replace it with a different valid hash, the system has failed, since the attacker now has a valid hash of message + extension.

It is not clear how an attacker could use this attack to crack a password hash quicker. However, because of the attack, it is considered bad practice to use a plain hash function for keyed hashing. A clever cryptographer may one day come up with a clever way to use these attacks to make cracking faster, so use HMAC.

Should the salt come before or after the password?

It doesn’t matter, but pick one and stick with it for interoperability’s sake. Having the salt come before the password seems to be more common.

Why does the hashing code on this page compare the hashes in “length-constant” time?

Comparing the hashes in “length-constant” time ensures that an attacker cannot extract the hash of a password in an on-line system using a timing attack, then crack it off-line.

The standard way to check if two sequences of bytes (strings) are the same is to compare the first byte, then the second, then the third, and so on. As soon as you find a byte that isn’t the same for both strings, you know they are different and can return a negative response immediately. If you make it through both strings without finding any bytes that differ, you know the strings are the same and can return a positive result. This means that comparing two strings can take a different amount of time depending on how much of the strings match.

For example, a standard comparison of the strings “xyzabc” and “abcxyz” would immediately see that the first character is different and wouldn’t bother to check the rest of the string. On the other hand, when the strings “aaaaaaaaaaB” and “aaaaaaaaaaZ” are compared, the comparison algorithm scans through the block of “a” before it determines the strings are unequal.

Suppose an attacker wants to break into an on-line system that rate limits authentication attempts to one attempt per second. Also suppose the attacker knows all of the parameters to the password hash (salt, hash type, etc), except for the hash and (obviously) the password. If the attacker can get a precise measurement of how long it takes the on-line system to compare the hash of the real password with the hash of a password the attacker provides, he can use the timing attack to extract part of the hash and crack it using an offline attack, bypassing the system’s rate limiting.

First, the attacker finds 256 strings whose hashes begin with every possible byte. He sends each string to the on-line system, recording the amount of time it takes the system to respond. The string that takes the longest will be the one whose hash’s first byte matches the real hash’s first byte. The attacker now knows the first byte, and can continue the attack in a similar manner on the second byte, then the third, and so on. Once the attacker knows enough of the hash, he can use his own hardware to crack it, without being rate limited by the system.

It might seem like it would be impossible to run a timing attack over a network. However, it has been done, and has been shown to be practical. That’s why the code on this page compares strings in a way that takes the same amount of time no matter how much of the strings match.

How does the SlowEquals code work?

The previous question explains why SlowEquals is necessary; this one explains how the code actually works.

private static boolean slowEquals(byte[] a, byte[] b)
{
    int diff = a.length ^ b.length;
    for(int i = 0; i < a.length && i < b.length; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}

The code uses the XOR “^” operator to compare integers for equality, instead of the “==” operator. The reason why is explained below. The result of XORing two integers will be zero if and only if they are exactly the same. This is because 0 XOR 0 = 0, 1 XOR 1 = 0, 0 XOR 1 = 1, 1 XOR 0 = 1. If we apply that to all the bits in both integers, the result will be zero only if all the bits matched.

So, in the first line, if a.length is equal to b.length, the diff variable will get a zero value, but if not, it will get some non-zero value. Next, we compare the bytes using XOR, and OR the result into diff. This will set diff to a non-zero value if the bytes differ. Because ORing never un-sets bits, the only way diff will be zero at the end of the loop is if it was zero before the loop began (a.length == b.length) and all of the bytes in the two arrays match (none of the XORs resulted in a non-zero value).

The reason we need to use XOR instead of the “==” operator to compare integers is that “==” is usually translated/compiled/interpreted as a branch. For example, the C code “diff &= a == b” might compile to the following x86 assembly:

MOV EAX, [A]
CMP [B], EAX
JZ equal
AND [DIFF], 0
JMP done
equal:
AND [DIFF], 1
done:

The branching makes the code execute in a different amount of time depending on the equality of the integers and the CPU’s internal branch prediction state.

The C code “diff |= a ^ b” should compile to something like the following, whose execution time does not depend on the equality of the integers:

MOV EAX, [A]
XOR EAX, [B]
OR [DIFF], EAX

Why bother hashing?

Your users are entering their password into your website. They are trusting you with their security. If your database gets hacked, and your users’ passwords are unprotected, then malicious hackers can use those passwords to compromise your users’ accounts on other websites and services (most people use the same password everywhere). It’s not just your security that’s at risk, it’s your users’. You are responsible for your users’ security.

Source: Defuse Security


Let’s Encrypt Wildcard SSL Certificate using CERTBOT

What is a Wildcard Certificate?

In computer networking, a wildcard certificate is a public key certificate which can be used with multiple subdomains of a domain. The principal use is for securing web sites with HTTPS, but there are also applications in many other fields. Compared with conventional certificates, a wildcard certificate can be cheaper and more convenient than a certificate for each subdomain.

Example:

A single wildcard certificate for https://*.secops.in will secure all these subdomains on the secops.in domain:

  • www.secops.in
  • amisafe.secops.in
  • login.secops.in

Instead of getting separate certificates for subdomains, you can use a single certificate for all main domains and subdomains and reduce cost.

Because the wildcard only covers one level of subdomains (the asterisk doesn’t match full stops), these domains would not be valid for the certificate:

  • test.login.secops.in

The “naked” domain is valid when added separately as a Subject Alternative Name (SubjectAltName):

  • secops.in

Who is LetsEncrypt!

Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. It is a service provided by the Internet Security Research Group (ISRG).

Let’s Encrypt gives people the digital certificates they need in order to enable HTTPS (SSL/TLS) for websites, for free, in the most user-friendly way we can. We do this because we want to create a more secure and privacy-respecting Web.

The key principles behind Let’s Encrypt are:

  • Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
  • Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
  • Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
  • Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
  • Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.

More detailed information about how the Let’s Encrypt CA works is available on the Let’s Encrypt website.

What is Certbot?

Certbot is an easy-to-use automatic client that fetches and deploys SSL/TLS certificates for your webserver. Certbot was developed by EFF and others as a client for Let’s Encrypt and was previously known as “the official Let’s Encrypt client” or “the Let’s Encrypt Python client.” Certbot will also work with any other CAs that support the ACME protocol.

While there are many other clients that implement the ACME protocol to fetch certificates, Certbot is the most extensive client and can automatically configure your webserver to start serving over HTTPS immediately. For Apache, it can also optionally automate security tasks such as tuning ciphersuites and enabling important security features such as HTTP → HTTPS redirects, OCSP stapling, HSTS, and upgrade-insecure-requests.

Certbot is part of EFF’s larger effort to encrypt the entire Internet. Websites need to use HTTPS to secure the web. Along with HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship.

Certbot is the work of many authors, including a team of EFF staff and numerous open source contributors.

The Certbot privacy policy is described here.

Steps to generate Free Let’s Encrypt Wildcard SSL Certificate

Step#1: Install latest Certbot

$ wget https://dl.eff.org/certbot-auto
$ chmod a+x ./certbot-auto
$ sudo mv certbot-auto /usr/bin/certbot

Proceed to Step#2

Step#2: Generate the wildcard certificate with DNS Challenge (Eg. Domain: *.secops.in)

$ sudo certbot certonly \
--server https://acme-v02.api.letsencrypt.org/directory \
--manual --preferred-challenges dns \
-d *.secops.in -d secops.in

An important parameter to notice is --server https://acme-v02.api.letsencrypt.org/directory, which instructs the Certbot client to use v2 of the Let’s Encrypt API (we need that for wildcard certs). Also notice the two -d domains: one is the wildcard and the second is the bare (apex) domain, since the wildcard does not cover the bare domain itself or deeper levels of subdomains (explained in the first section).

The Certbot client will walk you through the process of registering an account, and it will instruct you on what to do to complete the challenges.

Proceed to Step#3

Step#3: Create a DNS TXT Record as instructed in the “secops.in” DNS Zone File

-------------------------------------------------------------------------------
Please deploy a DNS TXT record under the name
_acme-challenge.secops.in with the following value:
 
02HdxCLqTbjvjtO7mnLV1XXXXXXExamplEONlyabC
 
Before continuing, verify the record is deployed.
-------------------------------------------------------------------------------
Press Enter to Continue

Proceed to Step#4 for results

Step#4: Successful or Unsuccessful Messages

A Successful message would look like:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
 /etc/letsencrypt/live/secops.in/fullchain.pem
 Your key file has been saved at:
 /etc/letsencrypt/live/secops.in/privkey.pem
 Your cert will expire on 2018-07-22. To obtain a new or tweaked
 version of this certificate in the future, simply run certbot
 again. To non-interactively renew *all* of your certificates, run
 "certbot renew"
 - If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
 Donating to EFF: https://eff.org/donate-le

Your certificate and chain have been saved at: /etc/letsencrypt/live/secops.in/fullchain.pem
Your key file has been saved at: /etc/letsencrypt/live/secops.in/privkey.pem

An Unsuccessful message would look like:

Failed authorization procedure. secops.in (dns-01): urn:ietf:params:acme:error:unauthorized :: 
The client lacks sufficient authorization :: 
Incorrect TXT record "02HdxCLqTbjvjtO7mnxxxxXXxXxXHGZM2LlNXCSgOTQTzlp51ARngrBadcOnFigYvtv6SOg-BadcOnFigLts37Q0" 
found at _acme-challenge.secops.in

IMPORTANT NOTES:

 - The following errors were reported by the server:
   Domain: secops.in
   Type:   unauthorized
   Detail: Incorrect TXT record
   "02HdxCLqTbjvjtO7mnxxxxXXxXxXHGZM2LlNXCSgOTQTzlp51ARngrBadcOnFigYvtv6SOg-BadcOnFigLts37Q0"
   found at _acme-challenge.secops.in
   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.

In such cases, please recheck your DNS TXT Record using DNS lookup tools like dig, nslookup etc. as shown below:

$ dig TXT _acme-challenge.secops.in

; <<>> DiG 9.10.6 <<>> TXT _acme-challenge.secops.in
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10301
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:
;_acme-challenge.secops.in. IN TXT

;; ANSWER SECTION:
_acme-challenge.secops.in. 225 IN TXT "ECtBiSVn-qIufdfzHLTTlWVx09mWAv8MbzSZGFBbkQc"

;; Query time: 33 msec
;; SERVER: 208.67.220.220#53(208.67.220.220)
;; WHEN: Mon Apr 23 14:04:44 IST 2018
;; MSG SIZE  rcvd: 154

From the output above, it is clear that the TXT record does not contain the value provided in Step#3. In that case, re-run the tool and make sure the TXT record is set up exactly as instructed, so that the wildcard SSL certificate can be generated successfully.

Sources: Let’s Encrypt, Certbot


Secure your API – Best Practices

Why do APIs need special attention?

As an increasing number of organizations provide API access to make their information available to a wider audience, securing that access is likewise of increasing importance. With the growing adoption of cloud, mobile, and hybrid environments the risks are increasing. Cyber threats and DDoS attacks are targeting enterprise applications as back-end systems become more accessible. In such situations, the API can be a significant point of vulnerability with its ability to offer programmatic access to external developers.

The previous wave of security breaches occurred when organizations opened access to their web applications. Defences, such as web application firewalls (WAF), were introduced to mitigate those security breaches. However, these defences are not effective against all API attacks, and you’ll need to focus on the security of your API interfaces.

API Security

The predominant API interface is the REST API, which is based on the HTTP protocol and generally returns JSON-formatted responses. Securing your API interfaces has much in common with web access security, but presents additional challenges due to:

  • Exposure to a wider range of data
  • Direct access to the back-end server
  • Ability to download large volumes of data
  • Different usage patterns

This topic has been covered in several sites such as OWASP REST Security, and we will summarise the main challenges and defences for API security.

Authentication

The first defence is to identify the user, human or application, and determine that the user has permission to invoke your API. We are all familiar with entering a user ID and password, and possibly an additional identifier, to access a web interface. The API equivalent is an API ID and an API key that are specified on the request to authenticate the caller. The API key is a unique string generated for the application for each user of the API.
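
A minimal Java sketch of that check. The in-memory key store and the method names are illustrative assumptions; a real service would keep the keys (ideally hashed) in a database and read them from the request headers:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;

public class ApiKeyAuth {
    // Illustrative in-memory store mapping API ID -> issued API key.
    private final Map<String, String> keysByApiId;

    ApiKeyAuth(Map<String, String> keysByApiId) {
        this.keysByApiId = keysByApiId;
    }

    // Returns true only if the presented key matches the one issued for this API ID.
    boolean isAuthorized(String apiId, String presentedKey) {
        String expected = keysByApiId.get(apiId);
        if (expected == null || presentedKey == null) return false;
        // Constant-time comparison, so response timing doesn't leak how much of the key matched.
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                presentedKey.getBytes(StandardCharsets.UTF_8));
    }
}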

Access Control

Not all authenticated users will necessarily be authorized to access all provided APIs. For example, some users require access to retrieve (GET) information, but should not be able to change (PUT) any information. Using an access control framework, such as OAuth, you control the list of APIs that each specific API key can access.

To prevent massive volumes of API requests that can cause a DDoS attack or other misuse of the API service, apply a limit to the number of requests allowed in a given time interval for each API key. When the rate is exceeded, block access from that API key, at least temporarily, and return the 429 (Too Many Requests) HTTP status code.
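
A rough Java sketch of a fixed-window limiter per API key; the limit, the window, and the class name are illustrative, and production systems usually enforce this at an API gateway or with a shared store instead:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RateLimiter {
    private static final int LIMIT = 100;             // illustrative: 100 requests...
    private static final long WINDOW_MS = 60_000;     // ...per minute, per API key

    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();
    private volatile long windowStart = System.currentTimeMillis();

    // Returns the HTTP status the caller should use: 200 to proceed, 429 to block.
    // Window rollover is simplified here and not strictly thread-safe; fine for a sketch.
    int check(String apiKey) {
        long now = System.currentTimeMillis();
        if (now - windowStart >= WINDOW_MS) {          // start a new fixed window
            counts.clear();
            windowStart = now;
        }
        int used = counts.computeIfAbsent(apiKey, k -> new AtomicInteger()).incrementAndGet();
        return used > LIMIT ? 429 : 200;
    }
}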

Encryption

To protect the request in transit, require HTTPS for all access so that the messages are secured and encoded with TLS.

To protect the JSON formatted response, consider employing the JSON Web Token (JWT) standard. As stated in that site,

JWT is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA.
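
As a rough illustration, here is a Java sketch that produces an HS256-signed JWT using only the JDK. The secret and the claims are placeholders; in practice a maintained JWT library should be used, since it also handles parsing, signature verification, and expiry checks:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSketch {
    // Build "header.payload.signature" with an HMAC-SHA256 signature.
    static String hs256Token(byte[] secret, String payloadJson) throws Exception {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header  = enc.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        String signingInput = header + "." + payload;

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        String signature = enc.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
        return signingInput + "." + signature;
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "change-me-demo-secret".getBytes(StandardCharsets.UTF_8); // illustrative only
        System.out.println(hs256Token(secret, "{\"sub\":\"user42\",\"role\":\"reader\"}"));
    }
}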

Confidentiality

As a best practice, we do not want to expose any more information than is required. For that reason, be careful that error messages do not reveal too much information.

Application Layer Attacks

Some actors can target your system after they circumvent your secured access and data interface. For example, they could obtain authorization credentials via phishing. You’ll need to validate all data to prevent application layer attacks, such as:

  • Cross-Site Scripting – Malicious scripts are injected into one of the request parameters
  • Code Injection – Inject valid code to services, such as SQL (SQL injection) or XQuery, to open the interface to user control
  • Business Logic Exploits – Allows the attacker to circumvent the business rules
  • Parameter Pollution Attacks – Exploit the data sent in the API request by modifying the parameters of the API request

Apply strict INPUT VALIDATION as you would on any interface (a small sketch follows this list), including:

  • Restrict, where possible, parameter values to a whitelist of expected values
  • To facilitate a whitelist, use strong typing of input values
  • Validate posted structured data against a formal schema language, to restrict the content and structure
  • Blacklist risky content, such as SQL schema manipulation statements
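
A small Java sketch of this kind of whitelist validation; the parameter names, allowed values, and pattern are illustrative assumptions:

import java.util.Set;
import java.util.regex.Pattern;

public class InputValidation {
    // Whitelist of expected values for an enumerated parameter (illustrative).
    private static final Set<String> ALLOWED_SORT_FIELDS = Set.of("name", "created", "size");
    // Strongly-typed pattern for an identifier parameter (illustrative).
    private static final Pattern ACCOUNT_ID = Pattern.compile("^[0-9]{1,12}$");

    static String validateSortField(String value) {
        if (!ALLOWED_SORT_FIELDS.contains(value))
            throw new IllegalArgumentException("unexpected sort field"); // reject, don't try to sanitize
        return value;
    }

    static long validateAccountId(String value) {
        if (value == null || !ACCOUNT_ID.matcher(value).matches())
            throw new IllegalArgumentException("invalid account id");
        return Long.parseLong(value);                                    // strong typing after validation
    }
}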

Log all API access. This is essential to assist resolving any issues. From these logs you can easily monitor activity and potentially discover any patterns or excessive usage activity.

Security Framework

Consider using an existing API framework that has many of the security features built in. If you must develop your own, separate out the security portion from the application portion. In this way, security is uniformly built in and developers can focus on the application logic.

After all that, don’t neglect to allocate resources to test the security of your APIs. Make sure to test all the defences mentioned in this article.

 

Keep Defending !


Compile NGINX with NAXSI – Part#1

Why Re-Invent the Wheel ? ? ? ?

In this tutorial/walkthrough, I shall provide detailed instructions on how to compile and configure NAXSI with NGINX on Ubuntu 14.04, as the standard Ubuntu repos ship a very old NAXSI-enabled NGINX build which I have personally found to be very buggy !


Schedule:

Part#1: Installation and basic configuration of NGINX-NAXSI

Part#2: Pumping the NGINX and NAXSI logs to ELASTICSEARCH

Part#3: Analyzing the logs and automated process of generating the false-positives and exclusions

Part#4: Conclusion


Requirements:

  • Ubuntu 14.04
  • GIT tools installed and setup on the server

Rest of the dependencies shall be provided below.


Step#1: Download Config Files

Ensure that you have the necessary config files handy; they have been preconfigured to work with Ubuntu 14.04 and are available from my Git repo: https://github.com/aarvee11/nginx_1.11.6-naxsi_latest

cd /tmp
git clone https://github.com/aarvee11/nginx_1.11.6-naxsi_latest.git

Step#2: Install Dependencies

As NAXSI and NGINX are being compiled from source, we will have to set up our server manually by installing all the dependencies below:

apt-get update
apt-get install automake gcc make pkg-config libtool g++ libfl-dev bison build-essential libbison-dev libyajl-dev liblmdb-dev libpcre3-dev libcurl4-openssl-dev libgeoip-dev libxml2-dev libyajl2 libxslt-dev openssl libssl-dev libperl-dev libgd2-xpm-dev

Step#3: Download and Setup

Run the following commands as given below. Ensure that the necessary permissions are given.

I have a habit of playing risky by running the commands as “sudo su” but it’s not really safe to play that risky on a production machine. Please follow your best standards to get things running on the server !

cd /usr/src 
wget https://github.com/nbs-system/naxsi/archive/master.zip 
wget http://nginx.org/download/nginx-1.11.6.tar.gz 
unzip master.zip 
tar -zxvf nginx-1.11.6.tar.gz 
git clone https://github.com/openresty/headers-more-nginx-module.git 
git clone https://github.com/flant/nginx-http-rdns.git 
cd /usr/src/nginx-1.11.6/ 
./configure --prefix=/etc/nginx \
 --add-module=/usr/src/naxsi-master/naxsi_src/ \
 --add-module=/usr/src/headers-more-nginx-module \
 --add-module=/usr/src/nginx-http-rdns/ \
 --with-cc-opt='-g -O2 -fstack-protector \
 --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' \
 --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' \
 --http-log-path=/var/log/nginx/access.log \
 --error-log-path=/var/log/nginx/error.log \
 --with-debug --with-pcre-jit --with-ipv6 \
 --with-http_ssl_module --with-http_stub_status_module \
 --with-http_realip_module \
 --with-http_addition_module \
 --with-http_dav_module \
 --with-http_geoip_module \
 --with-http_gzip_static_module \
 --with-http_image_filter_module \
 --with-http_sub_module \
 --with-http_xslt_module \
 --with-mail \
 --with-mail_ssl_module \
 --http-client-body-temp-path=/tmp/client_body_temp \
 --http-proxy-temp-path=/tmp/proxy_temp \
 --http-fastcgi-temp-path=/tmp/fastcgi_temp \
 --http-uwsgi-temp-path=/tmp/uwsgi_temp \
 --http-scgi-temp-path=/tmp/scgi_temp
make
make install
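Once make install finishes, it helps to confirm that all three modules were compiled in. With the prefix used above, the binary lands in /etc/nginx/sbin:

/etc/nginx/sbin/nginx -V

The configure arguments printed should list the naxsi, headers-more and rdns add-modules.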

Step#4: Create/Copy Config files

Using the pre-configured files downloaded in Step#1, we shall now copy them over to the correct locations as shown below:

cp /tmp/nginx_1.11.6-naxsi_latest/etc/init.d/nginx /etc/init.d/nginx
cp -r /tmp/nginx_1.11.6-naxsi_latest/nginx/conf/* /etc/nginx/conf/
ln -s /etc/nginx/sbin/nginx /usr/sbin/nginx
mkdir /etc/nginx/conf/sites-available
mkdir /etc/nginx/conf/sites-enabled
cp /usr/src/naxsi-master/naxsi_config/naxsi_core.rules /etc/nginx/conf/
mkdir /etc/nginx/conf/naxsi-whitelist/
touch /etc/nginx/conf/whitelist.conf

 Step#5: Configure your website on NGINX-NAXSI

You can use the sample configuration that can be found under “/tmp/nginx_1.11.6-naxsi_latest/nginx/sites-available” directory that has already been copied in the above step.

Edit the config file to match your requirements such as Site Name, Upstream IP/ Server etc. The sample has been provided below for quick reference:

server {
  listen 80 default_server;
  #listen [::]:80 default_server ipv6only=on;
  #root /var/www/nginx/html;
  #index index.html index.htm;
  # Make site accessible from http://localhost/
  server_name *.example.com; # Replace it with your website hostname. * is wildcard.
  set $naxsi_extensive_log 1;
  location / {
    # Uncomment to enable naxsi on this location
    include /etc/nginx/conf/naxsi.rules;
    include /etc/nginx/conf/naxsi-whitelist/*.rules;
    #try_files $uri $uri/ @rewrite;
    proxy_pass http://127.0.1.80:8000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Connection close;
    proxy_set_header X-Real-IP $remote_addr;
    # Comment the below line if there is already an upstream reverse proxy server that is setting the actual client IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
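The server block above includes /etc/nginx/conf/naxsi.rules, which is not shown in this post. As a rough reference (the exact file shipped in the repo may differ), a typical per-location NAXSI rules file using the project's default check rules looks like this:

# naxsi.rules - typical per-location NAXSI settings (scores are the NAXSI defaults)
SecRulesEnabled;                 # turn NAXSI on for this location
LearningMode;                    # alert-only: log matches instead of blocking
DeniedUrl "/RequestDenied";      # internal location that handles blocked requests (must exist in your server block)
CheckRule "$SQL >= 8" BLOCK;
CheckRule "$RFI >= 8" BLOCK;
CheckRule "$TRAVERSAL >= 4" BLOCK;
CheckRule "$EVADE >= 4" BLOCK;
CheckRule "$XSS >= 8" BLOCK;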

Once the configuration is complete, run the commands below to create a symlink of the config file in the sites-enabled directory so that NGINX can pick it up:

cd /etc/nginx/conf/sites-enabled
ln -s ../sites-available/<virtual-host-config-file> .
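Before restarting, validate the configuration; assuming the init script copied in Step#4 is in place:

nginx -t
service nginx restart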

Conclusion:

With all the above steps, we are now ready to deploy our Web Application in an Alert-Only mode, which starts scanning incoming web requests and generating events whenever a rule matches.

In the upcoming second part, I shall be providing detailed steps on how to setup the logging for NGINX and NAXSI using Elasticsearch.


As always I say:

Keep Defending !


The Fight with Fake Facebook Bots !

FacebookIPLists

Daily Checked and Updated if Facebook modifies their list.

I am sure most web admins are tired of the FAKE Facebook bots that manually set their User-Agent to impersonate the real Facebook crawlers and keep scraping data.

Hence, here are the lists of IPv4 and IPv6 addresses needed to whitelist the real Facebook IP addresses as per Facebook's documentation.

Please note that the list is checked daily at 12:00 PM IST (GMT+5:30) and updated whenever Facebook changes its IP lists; otherwise, the commit history of the repo shows how long the addresses have remained unchanged since the first check-in.

Link: GitHub Repo
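If you would rather pull the current ranges yourself, Facebook's crawler documentation suggests deriving them from their ASN (AS32934) with a whois lookup:

whois -h whois.radb.net -- '-i origin AS32934' | grep ^route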


Secure SDLC

Introduction:

SDLC stands for Software Development LifeCycle.

WHY do we need to secure it when it is already a global standard across every development cycle?

Remember the quote?

A stitch in time saves nine!

S-SDLC is a preventive process to avoid any security mishaps once a product is out there in the wild. The GOAL is to keep the application SAFE from breaches/attacks/hacks. Fixing a security vulnerability in a product which is already released is always a costly affair in terms of Brand, Revenue & Time!

Phase#1: SCOPING:

Gather as much information as possible in this phase regarding the feature in question. This helps us understand the business functionality much better, so that no security measure we put in place affects the business requirements. Ultimately it's our responsibility to deliver what the business wants. Do involve the key people in this phase, like the Product Manager, senior management or even the application developers.

Phase#2: DESIGN:

In the Design phase, it is our responsibility to provide a flow which is secure by design. Think of the logical-flow-based attacks which purely target the core logic of the feature. Ensure that sufficient checks are placed at multiple points, and that the source of truth for core logic never relies completely on the client side; there must always be server-side validation. For example, in a checkout flow for an e-commerce application, never take the client-side cart value as the source of truth: treat it only as a validation step, and let the merchant's server provide the authoritative amount to the payment gateway!
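A minimal sketch of the checkout example (Python, hypothetical field names): the client-supplied total is only a cross-check, while the amount actually sent to the payment gateway is computed on the server.

# Hypothetical names - never treat the client's cart total as the source of truth
def build_payment_request(cart_items, client_total):
    # source of truth: prices held on the merchant's server
    server_total = sum(item["unit_price"] * item["qty"] for item in cart_items)
    if client_total != server_total:
        # validation step only - flag a possibly tampered client
        raise ValueError("cart total mismatch, possible tampering")
    return {"amount": server_total, "currency": "INR"}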

Phase#3: CODING:

Ensure that the best coding standards and practices are followed. Static analysis of the code should also be done at this phase, and not at the next one, so that developers do not go through multiple redundant cycles. For example, ensure that no passwords are hardcoded in the code; instead, define a strategy to pick up sensitive keys at runtime, when the application is deployed, so that they live only in runtime memory. That way, even if a breach happens on the application server, the credentials and keys are safe. Also ensure that proper sanitising wrappers are in place and that there is always a service layer between your frontend application and critical infrastructure like the database, one which sanitises/parametrises query inputs before executing them on the database server.
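A minimal sketch of both ideas (Python with sqlite3 purely for illustration; names are hypothetical):

# Secrets come from the runtime environment, never from the source code
import os
import sqlite3

DB_PASSWORD = os.environ.get("DB_PASSWORD")   # would be handed to your real DB driver at runtime

def find_user(conn: sqlite3.Connection, email: str):
    # the driver binds the value safely instead of splicing it into the SQL string
    return conn.execute("SELECT id, email FROM users WHERE email = ?", (email,)).fetchone()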

Please note that developer’s time is precious and this phase shall ensure that their time is not getting wasted in multiple redundant cycles!

Phase#4: TESTING:

The test cases for security testing should be pre-determined, so that once the RED Team receives the build after QA sanity, they can start manual testing against whichever OWASP Top 10 vulnerabilities relate to the feature. Ensure that proper regression tests are also in place if it is an enhancement to an existing feature. Logical flow testing is one of the most important tests in this phase; ensure that the flow developed complies 100% with the flow proposed in Phase#2: Design. A proper VAPT should be done end-to-end and the results recorded in your bug/task tracking tools like JIRA for future regression-testing reference.

Always prioritise security bugs with respect to the expected impact of the issue identified, and ensure that the impact description is well understood by the stakeholders! The bug bounty platforms have helped bounty hunters and security researchers practise this step by providing a proper flow on their platforms. Try raising a security bug in Bugcrowd!

If bugs are found, get them fixed in a timely manner, and DO ENSURE THAT YOU HAVE A SOLUTION for the bug raised beforehand: ultimately you are the one who found the security bug, so propose a secure solution for it. For best practices and evasion techniques, please refer to the OWASP Top 10 Cheat Sheet and use it as a standard while addressing vulnerabilities and requesting a fix.

Phase#5: DEPLOY:

This is the last phase, in which we need to ensure that the environment where the feature is getting deployed is secured. Ensure that the server has been hardened and the versions in use are free of known vulnerabilities! Also perform a network assessment where and when required, and try to avoid non-standard ports. Ensure that a proper TLS implementation is in place. A Security Architect with the right DevOps skills can help you achieve the best results.

Once we are able to implement the above process, we can ensure that our applications that are getting deployed are well protected right from design itself. But remember:

Security is a Process, not a Product!

Ensure that you have sufficient monitoring and alerts in place, and that a proper Incident Response Team is well trained on the new feature and on how to identify security incidents and attack attempts. Design a process for handling live attacks and ensure that sufficient tools and visibility are provided to the monitoring teams, like the SOC.

Keep Defending !


Generate BKS Certificates for SSL Pinning in Android Apps

 

  1. Ensure that the certificate for which we are generating the BKS keystore is already loaded on the web server
  2. Generate the PKCS12 format Certificate from the PEM Certs using the command below:
    
    
    openssl pkcs12 -export -out cert-complete.pfx -inkey cert-key.pem -in cert-leaf.pem -certfile cert-chain.pem
    
    Enter Export Password:
    
    Verifying - Enter Export Password:
  3. Ensure that a strong password is set, at least 12 characters long (alphanumeric + special chars)
  4. Download the tool “Portecle” from the Website http://portecle.sourceforge.net/
  5. Run the Tool
  6. Load the PKCS12 SSL certificate that was generated in the Step#2
  7. Enter the password that was set
  8. You shall see the certificate getting listed
  9. Highlight the entry, navigate to “Tools → Change Keystore Type → BKS”
  10. Enter the same password again
  11. If the password is correct, you shall see a confirmation message that the keystore type has been changed
  12. Now Select the same entry, and Select the option “File → Save Keystore As”
  13. Save the file on the desired location after setting a password if prompted. ENSURE THAT THE EXTENSION OF THE FILE IS .bks
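If you prefer the command line over Portecle, the same PKCS12-to-BKS conversion can also be done with keytool plus the BouncyCastle provider JAR (the paths and JAR file name below are assumptions; adjust to your setup):

keytool -importkeystore \
  -srckeystore cert-complete.pfx -srcstoretype PKCS12 \
  -destkeystore cert-complete.bks -deststoretype BKS \
  -providerclass org.bouncycastle.jce.provider.BouncyCastleProvider \
  -providerpath /path/to/bcprov-jdk15on.jar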

Build your own DoS/DDoS/Bot Mitigation Gear & Fighting Fake Google/Yahoo/Bing/Apple Bots

HAPROXY(Rate Limiter + BadUser Detection)

NGINX(rDNS + GeoIP)

Platforms used are completely Open Source. Please refer to their respective documentation from the respective links below:

NGINX: https://nginx.org/en/docs/

HAPROXY: http://www.haproxy.org/#docs

In this article I shall discuss how we can build our own Service Protection Layer, which shall include modules to protect our web application against DoS, DDoS and bad bots based on User-Agents, along with a Web Application Firewall.

Whenever we introduce an extra service layer into our web application architecture, the chances of increased latency are high. Hence, for best results, always follow the flow below when you are building a Service Protection Layer in front of your web application.

HAPROXY ----Static Content----> Web Application

HAPROXY ----Dynamic Content----> NGINX ----> Web Application

Using HAPROXY we shall perform:

Rate Limiting

User-Agent Detection (Bot Mitigation)

Using NGINX we shall perform:

Fake Google/Yahoo/Bing/Apple bot detection (via reverse DNS)

GeoIP-based blocking (GeoBlock)

Please download the signatures from https://github.com/aarvee11/webclient-detection or feel free to write your own !

The art of detecting and blocking attacks is all about finding Signatures, Patterns and understanding the Attack Vector!

Rate Limiting: Rate limiting is a technique used to stop a particular client from abusing our platform with overwhelming requests, ensuring that the resources on the web application server are always available for legitimate users. Rate limiting for a web application should be gauged on 4 different parameters:

  1. Number of TCP Connections per client IP
  2. Rate of Incoming TCP Connections from a Client IP
  3. Rate of Incoming HTTP Request from a HTTP Client
  4. Rate of HTTP Errors getting generated by a HTTP Client

Note: Client at application/HTTP layer can be identified based on IP, Cookie, Parameter, IP+Cookie, IP+Cookie+User-Agent or any Other HTTP Header like Authorization etc.

When you notice a very large amount of traffic getting caught by the rate controls, prepare to start scaling your service protection layer infrastructure using techniques like AWS Auto Scaling based on Network-In and Network-Out metrics at the instance level.

Bad-bots: Most bots leave a signature behind, and the User-Agent is one of the key headers that gives them away. We shall compare the User-Agent against a known bad-bots list that can be downloaded from the git repo mentioned above.

Fake Google/Yahoo/Bing/Apple bots: Most web admins do not set any security controls on friendly bots like the ones mentioned above. Reason? If we block those bots, our SEO, visibility and functionality can take a hit, as these are friendly bots and not bad bots. How do they identify themselves? By connecting IP address and User-Agent. All of the above service providers clearly state that they cannot publish their IP address ranges, as those change very dynamically, and the only option left is to check the User-Agent, which is consistent. But the challenge for a security administrator is that the User-Agent field is completely configurable, so any client can start using the well-known friendly-bot User-Agent signatures and cause harm to our application. Hence I adopted the approach suggested by Google: perform a reverse DNS lookup on the connecting IP address. I use the rDNS module to verify the connecting IP address of clients presenting these User-Agents, and when they are found to be fake they are rejected automatically. Cool, isn't it?

GeoIP: ICANN distributes IP addresses all over the world, and based on that information we can determine the country from which a client is connecting to our web application. What if we have been seeing a good amount of attacks from a specific country? As we can't keep mapping individual IP addresses, we can always opt for geo-blocking by making NGINX geo-aware using the GeoIP module. Once done, we can choose which countries are explicitly allowed and block the others, or vice versa!

For the ease of administrators and not to keep this article very theoretical, I have provided config snippets below:

For any further queries please feel free to InMail me!

Happy Hunting folks!

HAPROXY Config Snippet:

In the Frontend Section:

=======================================

# Capture the Rate Limit Headers and Actual Client IP in Logs
capture request header X-Haproxy-ACL len 256
capture request header X-Bad-User len 64
capture request header X-Forwarded-For len 64
capture request header User-Agent len 256
capture request header Host len 32
capture request header X-Geo len 8
capture request header X-Google-Bot len 8
capture request header Referer len 256

# Do Not Rate Control Google based bots. Fake Google bot attacks shall be filtered at NGINX
acl google-ua hdr(user-agent) -i -f <path-to-webclient-detection-directory>/google-ua.lst
http-request add-header X-Google-Bot %[req.fhdr(X-Google-Bot,-1)]Goog-YES, if google-ua

# Define a table that will store IPs associated with counters
stick-table type ip size 500k expire 30s store conn_cur,conn_rate(3s),http_req_rate(10s),http_err_rate(10s)

# Enable tracking of src IP in the stick-table - Secops
tcp-request content track-sc0 src

=======================================

# RATE LIMITING RULES
acl sensitive-urls path -i /api/app/login path -i /api/app/otp path -i /api/app/forgotpass

# Flag the client if it already has 100 connections opened
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-over-100-active-connections, if { src_conn_cur ge 100 }

# Flag the client if it has opened more than 65 TCP connections in 3 seconds
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-over-65-connections-in-3seconds, if { src_conn_rate ge 65 }

# Flag the client if it has passed the HTTP error rate (10 HTTP errors in 10 seconds)
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-10-errors-in-10-seconds, if { sc0_http_err_rate() gt 10 }

# Flag the client if it has passed the HTTP request rate (70 HTTP requests in 10 seconds) on non-sensitive URLs
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-70-HTTPRequests-in-10-seconds, if { sc0_http_req_rate() gt 70 } !sensitive-urls

# Flag clients exceeding 30 requests in a 10-second interval on the sensitive URLs
http-request add-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]Rate-Limit-Listing, if { sc0_http_req_rate() gt 30 } sensitive-urls

=======================================

# FLAGGING BAD-BOTS @ HAPROXY LEVEL
acl badbots hdr_sub(user-agent) -f <path-to-webclient-detection-directory>/bad-bots.lst
acl nullua hdr_len(user-agent) 0
acl availua hdr(user-agent) -m found
acl ua-regex hdr_reg(user-agent) -i .+?[/\s][\d.]+
acl tornodes src -f <path-to-webclient-detection-directory>/tor-exit-nodes.lst

http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]BadBot, if badbots
http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]No-UA, if nullua !google-ua
http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]No-UA, if !availua !google-ua
http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]Invalid-UA, if !ua-regex !google-ua
http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]Tor-Node, if tornodes
http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]Trace-Method, if METH_TRACE

======================================= 

# FLAGGING SCANNERS @ HAPROXY LEVEL
acl scanner hdr_sub(user-agent) -f <path-to-webclient-detection-directory>/scanners.lst
http-request add-header X-Bad-User %[req.fhdr(X-Bad-User,-1)]Scanner, if scanner
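The rules above only flag suspicious clients by appending to the X-Haproxy-ACL / X-Bad-User headers, which is the right mode while you are still tuning the thresholds. Once you trust the signatures, one possible way (adapt to your own policy) to turn the flags into actual blocks is:

# Optional enforcement - only after the thresholds have been tuned
http-request deny if { sc0_http_req_rate() gt 70 }
http-request deny if { req.fhdr(X-Bad-User,-1) -m found }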

 NGINX Config Snippet:

location / {
        # Enable Reverse DNS for all the friendly bots
        resolver 8.8.8.8;
        rdns_allow "(.*)google(.*)";
        rdns_allow "(.*)crawl.yahoo.net(.*)";
        rdns_allow "(.*)search.msn.com(.*)";
        rdns_allow "(.*)applebot.apple.com(.*)";
        rdns_deny "^(?!(.*)google(.*))|^(?!(.*)crawl.yahoo.net(.*))|^(?!(.*)search.msn.com(.*))|^(?!(.*)applebot.apple.com(.*))";

        if ($http_user_agent ~* "(.*)[Gg]oogle(.*)") {
            rdns on;
        }
        if ($http_user_agent ~* "(.*)[Ss]lurp(.*)") {
            rdns on;
        }
        if ($http_user_agent ~* "(.*)[Bb]ing(.*)") {
            rdns on;
        }
        if ($http_user_agent ~* "(.*)[Aa]pplebot(.*)") {
            rdns on;
        }

        # Set a variable to take an action based on the result
        if ($rdns_hostname ~* ((.*)googlebot\.com)) {
            set $valid_bot 1;
        }
        if ($rdns_hostname ~* ((.*)google\.com)) {
            set $valid_bot 1;
        }

        # Return 403 if the claimed bot is not really Google
        if ($valid_bot = "0") {
            return 403;
        }

        # Geo Block Section
        if ($allowed_country = yes) {
            set $exclusions 1;
        }
        if ($exclusions = "0") {
            return 451;
        }
}
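Note that the snippet assumes $allowed_country (and sane defaults for $valid_bot and $exclusions) are defined in the http context, which is not shown here. A minimal sketch of the GeoIP part, assuming the stock MaxMind country database path:

# http context: make NGINX geo-aware and whitelist countries
geoip_country /usr/share/GeoIP/GeoIP.dat;
map $geoip_country_code $allowed_country {
    default no;
    IN      yes;   # example: allow India
    US      yes;   # example: allow the US
}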

 


10 Server-Side Configurations to Secure your HTTP Web Application !

1. CORS | Access-Control-Allow-Origin Policy:

CORS stands for Cross-Origin Resource Sharing; it is a technique for relaxing the same-origin policy, allowing JavaScript on a web page to consume a REST API served from a different origin. It is common for a modern website to include content delivered from another domain. Let's say there exists an API for POST, PUT or DELETE requests on your site. Then it's likely that your server will send a CORS response header that will look something like this:

Access-Control-Allow-Origin: http://www.mytrustedsite.com

This means that your server will happily deliver content to "http://www.mytrustedsite.com". Furthermore, and this is the significant part, if your API provides for it, "http://www.mytrustedsite.com" can add, alter or delete content on your database or web server. That's fine, because you designed your API to allow such transactions and you've explicitly given "http://www.mytrustedsite.com" permission to use your API.

However, if you’ve not set this explicitly, you might send the following header value:

Access-Control-Allow-Origin: *

The problem with this is that you are now allowing any website to interact with your API. If a vulnerable third-party site starts leveraging your content through your API, every user of that vulnerable site becomes a potential threat to the content exposed through it.
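A minimal NGINX sketch (the origin below is an example): pin the header to the origins you actually trust rather than using the wildcard.

add_header Access-Control-Allow-Origin "http://www.mytrustedsite.com";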

For further countermeasures, please refer to this link.

2. X-Permitted-Cross-Domain-Policies:

A cross-domain policy file is an XML document that grants a web client, such as Adobe Flash Player or Adobe Acrobat (though not necessarily limited to these), permission to handle data across domains. When a client requests content hosted on a particular source domain and that content makes requests directed towards a domain other than its own, the remote domain needs to host a cross-domain policy file that grants access to the source domain, allowing the client to continue the transaction. Normally a meta-policy is declared in the master policy file, but those who can't write to the root directory can also declare a meta-policy using the X-Permitted-Cross-Domain-Policies HTTP response header. The available values are:

  • none – no policy files are allowed anywhere on the server, including the master policy file
  • master-only – only the master policy file is allowed
  • by-content-type – (HTTP/HTTPS only) only policy files served with Content-Type: text/x-cross-domain-policy are allowed
  • by-ftp-filename – (FTP only) only policy files whose file name is crossdomain.xml are allowed
  • all – all policy files on the target domain are allowed

When you don’t want to allow content producers to embed your work in their content, ensure you have no crossdomain.xml files within your website’s directory structure. You should also send the following header with each response from your web-server:

X-Permitted-Cross-Domain-Policies: none

3. MIME Sniffing:

MIME sniffing, also known as content sniffing, is a technique by which the content-rendering user agent detects the type of content being served by the server and auto-corrects it if it is not declared correctly. This can happen when the metadata has not been configured the right way on the web server. While it adds ease of use, it also opens up security vulnerabilities: an attacker can craft a payload and get scripts executed at the client's end, resulting in an XSS attack.

Hence it is a very good practice to disable client-side MIME sniffing, which can be achieved by setting the following response header:

X-Content-Type-Options: nosniff

Please refer to one of the references below:

https://blog.mozilla.org/security/2016/08/26/mitigating-mime-confusion-attacks-in-firefox

Before we go ahead with the next 2 points, I would like to introduce a concept called "Reconnaissance", which means gathering as much data as possible about the target before launching your attacks, in order to make the attack vectors more focused and effective. Reconnaissance is Phase#1 for any ethical hacker, out of a total of 5 phases.

4. Server Identifier:

The HTTP response header "Server" reveals the identity of the web server software handling the current request. From an Ops point of view, it makes it a lot easier for an engineer to debug and troubleshoot in case of any issues. But think of the attacker's perspective: knowing the server type, the attacker can search known CVE IDs and already-discovered vulnerabilities that can be launched against the target, making the attacker's job easier.

5. X-Powered-By:

The HTTP response header "X-Powered-By" specifies the technology (e.g. ASP.NET, PHP, JBoss) supporting the web application (version details are often in X-Runtime, X-Version, or X-AspNet-Version). Same as above, let's stick to the principle below:

DO NOT GIVE AWAY THE INFORMATION ABOUT YOUR PRECIOUS SERVER STACK INFO. LET IT BE THE SECRET SAUCE OF YOUR WEB SERVICE APPLICATION.
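In NGINX, a minimal sketch for trimming both headers (more_clear_headers requires the headers-more module, the same one compiled into the NAXSI build earlier on this blog):

server_tokens off;                         # stop advertising the NGINX version
more_clear_headers Server X-Powered-By;    # strip the headers entirely (headers-more module)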

6. Cross-site scripting (XSS) filter:

Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.

An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site. These scripts can even rewrite the content of the HTML page.

For further classification of the XSS Types please refer to Types of Cross-Site Scripting

The Response Header below enables the XSS Filter on your browser ensuring that any suspected XSS Attack is mitigated at the browser level itself

X-XSS-Protection: 1; mode=block

Valid values for the header are:

  • 0 – disables the XSS filter
  • 1 – enables the XSS filter; if an attack is detected, the browser sanitises the page
  • 1; mode=block – enables the XSS filter; if an attack is detected, the browser blocks rendering of the page
  • 1; report=<reporting-URI> – (Chromium only) enables the filter and reports the violation to the given URI

7. Clickjacking:

Clickjacking, also known as a "UI redress attack", is when an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top-level page. Thus, the attacker is "hijacking" clicks meant for their page and routing them to another page, most likely owned by another application, domain, or both.

Using a similar technique, keystrokes can also be hijacked. With a carefully crafted combination of stylesheets, iframes, and text boxes, a user can be led to believe they are typing in the password to their email or bank account, but are instead typing into an invisible frame controlled by the attacker.

The response header below prevents clickjacking by declaring a policy, communicated from the host to the client browser, stating whether the browser may display the transmitted content in frames on other web pages.

X-Frame-Options: deny

Valid values for the header are:

  • deny – no rendering within a frame is allowed at all
  • sameorigin – the page may only be framed by pages from the same origin
  • allow-from: DOMAIN – the page may only be framed by the specified domain

8. Content Security Policy:

Content Security Policy (CSP) is an effective "defense in depth" technique against content injection attacks. It is a declarative policy that informs the user agent which sources are valid to load content from.

It was introduced by Mozilla in Firefox 4, and is now supported by Firefox 23+, Chrome 25+ and Opera 19+. It has since been adopted as a standard and has grown in adoption and capabilities.

Enabling CSP can sometimes break existing functionality, and hence a web security administrator needs to carefully examine the policy before rolling it out on an existing, running application. If it can be enabled right from the start for an app that is not yet published, that is indeed the best approach.

Below is an example for a CSP Header:

Content-Security-Policy: default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';

Directives (Source: OWASP LINK)

The following is a listing of directives, and a brief description.

CSP 1.0 Spec

  • connect-src (d) – restricts which URLs the protected resource can load using script interfaces. (e.g. send() method of an XMLHttpRequest object)
  • font-src (d) – restricts from where the protected resource can load fonts
  • img-src (d) – restricts from where the protected resource can load images
  • media-src (d) – restricts from where the protected resource can load video, audio, and associated text tracks
  • object-src (d) – restricts from where the protected resource can load plugins
  • script-src (d) – restricts which scripts the protected resource can execute. Additional restrictions against inline scripts and eval. Additional directives in CSP2 for hash and nonce support
  • style-src (d) – restricts which styles the user agent may apply to the protected resource. Additional restrictions against inline styles and eval.
  • default-src – Covers any directive with (d)
  • frame-src – restricts from where the protected resource can embed frames. Note, deprecated in CSP2
  • report-uri – specifies a URL to which the user agent sends reports about policy violation
  • sandbox – specifies an HTML sandbox policy that the user agent applies to the protected resource. Optional in 1.0

New in CSP2

  • form-action – restricts which URLs can be used as the action of HTML form elements
  • frame-ancestors – indicates whether the user agent should allow embedding the resource using a frame, iframe, object, embed or applet element, or equivalent functionality in non-HTML resources
  • plugin-types – restricts the set of plugins that can be invoked by the protected resource by limiting the types of resources that can be embedded
  • base-uri – restricts the URLs that can be used to specify the document base URL
  • child-src (d) – governs the creation of nested browsing contexts as well as Worker execution contexts

9. Deploy TLS:

TLS is commonly still referred to as SSL, even though SSL is in fact the predecessor of TLS. SSL/TLS are protocols used for encrypting information between two points. It is usually between server and client, but there are times when server-to-server and client-to-client encryption are needed. There are multiple advantages of enabling TLS encryption for your web services. Some of them are:

  • Introduces strong encryption of all data between the client and server
  • Protects against packet sniffing
  • Protects against man-in-the-middle attacks
  • When your users see that padlock in their web-browsers, they know that their connection is with your website, a trusted source.

TLS is a very deep topic which I have already covered in a separate post on this blog; you can refer to it via the LINK.

In order to check the TLS/SSL settings tuned on your web services, use the tool testssl.sh, which is available to download and runs on any *nix-based system. There is also a free online tool that can analyse your site and give you a rating plus suggestions to improve its SSL/TLS security level: the Qualys SSL Labs SSL Server Test. Always aim for an A+ grade!

10. Strict Transport Security:

HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should only interact with it using secure HTTPS connections, and never via the insecure HTTP protocol. HSTS is an IETF standards track protocol and is specified in RFC 6797. A server implements an HSTS policy by supplying a header (Strict-Transport-Security) over an HTTPS connection (HSTS headers over HTTP are ignored).

The following Response Header ensures the same:

Strict-Transport-Security: max-age=31536000 ; includeSubDomains

Values:

  • max-age – the time, in seconds, that the browser should remember that the site is only to be accessed over HTTPS
  • includeSubDomains – if specified, the rule applies to all of the site's subdomains as well
  • preload – (non-standard) requests inclusion in the browsers' built-in HSTS preload list
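To tie the header-related items together, a minimal NGINX sketch that sets the response headers discussed above (values are examples; the CSP in particular must be tuned per application):

add_header X-Content-Type-Options "nosniff";
add_header X-XSS-Protection "1; mode=block";
add_header X-Frame-Options "deny";
add_header X-Permitted-Cross-Domain-Policies "none";
add_header Content-Security-Policy "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;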

With all the measures above, your web service is still not completely invincible. Always have a healthy security testing lifecycle in your SDLC, a dedicated Web Application Firewall (I shall share my views on this in a separate article) that is well maintained with the latest rulesets, adequate logging and alerting in place, and, most importantly, a highly skilled Security Engineering Team that combines both offensive and defensive security engineers.

Keep Defending!


What is SSL – A Deep Dive

[Diagram: SSL handshake with two-way authentication with certificates – Source: Wikimedia]

SSL/TLS are protocols used for encrypting information between two points. It is usually between server and client, but there are times when server to server and client to client encryption are needed. For the purpose of this blog, I will focus only on the negotiation between server and client.

For SSL/TLS negotiation to take place, the system administrator must prepare a minimum of 2 files: the Private Key and the Certificate. When requesting from a Certificate Authority such as Symantec Trust Services, an additional file must be created. This file is called a Certificate Signing Request (CSR), generated from the Private Key. The process for generating the files depends on the software that will be using them for encryption.

Additional certificates called Intermediate Certificate Authority Certificates and Certificate Authority Root Certificates may need to be installed on the server. This is again server software dependent. There is usually no need to install the Intermediate and Root CA files on the client applications or browsers.

The following is a standard SSL handshake when RSA key exchange algorithm is used: (Please refer to the diagram Source: Wikimedia)

Client Hello
– Information that the server needs to communicate with the client using SSL.
– Including SSL version number, cipher settings, session-specific data.

Server Hello
– Information that the client needs to communicate with the server using SSL.
– Including SSL version number, cipher settings, session-specific data.
– Including Server’s Certificate (Public Key)

Authentication and Pre-Master Secret
– Client authenticates the server certificate. (e.g. Common Name / Date / Issuer)
– Client (depending on the cipher) creates the pre-master secret for the session,
– Encrypts with the server’s public key and sends the encrypted pre-master secret to the server.

Decryption and Master Secret
– Server uses its private key to decrypt the pre-master secret,
– Both Server and Client perform steps to generate the master secret with the agreed cipher.

Generate Session Keys
– Both the client and the server use the master secret to generate the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session

Encryption with Session Key
– Both client and server exchange messages to inform that future messages will be encrypted.

Source: Wikipedia

Tools such as OpenSSL can be used to check the SSL/TLS negotiation. Try running the commands below on a Linux/Mac/Windows machine which has a recent OpenSSL version installed and see the results:

openssl s_client -connect amisafe.secops.in:443 -ssl3

openssl s_client -connect amisafe.secops.in:443 -tls1

openssl s_client -connect amisafe.secops.in:443 -tls1_1

openssl s_client -connect amisafe.secops.in:443 -tls1_2
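To inspect the certificate the server actually presents (validity dates, subject, issuer), a small follow-up using the same example hostname:

openssl s_client -connect amisafe.secops.in:443 -servername amisafe.secops.in < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates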

 

 
