OWASP API Security Top 10 – 2019 (1st Version)

The first edition of the OWASP API Security Top 10

What is API Security?

A foundational element of innovation in today’s app-driven world is the API. From banks, retail and transportation to IoT, autonomous vehicles and smart cities, APIs are a critical part of modern mobile, SaaS and web applications and can be found in customer-facing, partner-facing and internal applications. By nature, APIs expose application logic and sensitive data such as Personally Identifiable Information (PII) and because of this have increasingly become a target for attackers. Without secure APIs, rapid innovation would be impossible.

API Security focuses on strategies and solutions to understand and mitigate the unique vulnerabilities and security risks of Application Programming Interfaces (APIs).

API Security Risk

The OWASP Risk Rating Methodology was used to perform the risk analysis. The table below summarizes the terminology associated with the risk score.

Threat Agents  | Exploitability | Weakness Prevalence | Weakness Detectability | Technical Impact | Business Impacts
API Specific   | Easy: 3        | Widespread: 3       | Easy: 3                | Severe: 3        | Business Specific
               | Average: 2     | Common: 2           | Average: 2             | Moderate: 2      |
               | Difficult: 1   | Difficult: 1        | Difficult: 1           | Minor: 1         |
Note:

This approach does not take the likelihood of the threat agent into account. Nor does it account for any of the various technical details associated with your particular application. Any of these factors could significantly affect the overall likelihood of an attacker finding and exploiting a particular vulnerability. This rating does not take into account the actual impact on your business. Your organization will have to decide how much security risk from applications and APIs it is willing to accept given its culture, industry, and regulatory environment. The purpose of the OWASP API Security Top 10 is not to do this risk analysis for you.

Brief summary of each threat:

A1:2019 – Broken Object Level Authorization:

APIs tend to expose endpoints that handle object identifiers, creating a wide attack surface for object-level access control issues. Object-level authorization checks should be performed in every function that accesses a data source using an input from the user.
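A quick way to probe for this class of bug is to authenticate as one user and then request an object identifier belonging to another. The endpoint, IDs and token below are hypothetical placeholders; a sketch like this only confirms the symptom, not the fix:

# Request our own object, then one owned by another user. A vulnerable
# API answers 200 with the other user's data instead of 403/404.
MY_TOKEN="eyJ..."   # placeholder token for user A
curl -s -o /dev/null -w "own object:   %{http_code}\n" \
  -H "Authorization: Bearer ${MY_TOKEN}" https://api.example.com/v1/orders/1001
curl -s -o /dev/null -w "other object: %{http_code}\n" \
  -H "Authorization: Bearer ${MY_TOKEN}" https://api.example.com/v1/orders/1002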

A2:2019 – Broken Authentication:

Authentication mechanisms are often implemented incorrectly, allowing attackers to compromise authentication tokens or to exploit implementation flaws to assume other users’ identities temporarily or permanently. Compromising a system’s ability to identify the client/user compromises API security overall.

A3:2019 – Excessive Data Exposure:

In pursuit of generic implementations, developers tend to expose all object properties without considering their individual sensitivity, relying on clients to perform the data filtering before displaying it to the user. Because the server does not control what the client actually renders, an attacker who calls the API directly can read the sensitive fields the client would normally hide.

A4:2019 – Lack of Resources & Rate Limiting:

Quite often, APIs do not impose any restrictions on the size or number of resources that can be requested by the client/user. Not only can this impact API server performance, leading to Denial of Service (DoS), but it also leaves the door open to authentication attacks such as brute force.
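A missing rate limit is easy to demonstrate from a shell. The login endpoint and parameters below are hypothetical; the telltale sign is that no attempt ever gets a 429 or lockout response:

# Replay a password list against a hypothetical login endpoint and print
# the status code of each attempt; with no rate limiting, nothing stops us.
while read -r pw; do
  curl -s -o /dev/null -w "%{http_code}\n" \
    --data-urlencode "user=admin" --data-urlencode "pass=${pw}" \
    https://api.example.com/v1/login
done < wordlist.txt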

A5:2019 – Broken Function Level Authorization:

Complex access control policies with different hierarchies, groups, and roles, and an unclear separation between administrative and regular functions, tend to lead to authorization flaws. By exploiting these issues, attackers gain access to other users’ resources and/or administrative functions.

A6:2019 – Mass Assignment:

Binding client-provided data (e.g., JSON) to data models without proper property filtering based on a whitelist usually leads to Mass Assignment. By guessing object properties, exploring other API endpoints, reading the documentation, or providing additional object properties in request payloads, attackers can modify object properties they are not supposed to.
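For example, a tester can simply add a property the client never sends. The endpoint and the is_admin property below are hypothetical; a vulnerable binder persists the extra field without complaint:

# Send a legitimate profile update plus one property we should not control.
curl -s -X PUT \
  -H "Authorization: Bearer ${MY_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"username":"alice","is_admin":true}' \
  https://api.example.com/v1/users/me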

A7:2019 – Security Misconfiguration:

Security misconfiguration is commonly a result of insecure default configurations, incomplete or ad-hoc configurations, open cloud storage, misconfigured HTTP headers, unnecessary HTTP methods, permissive Cross-Origin resource sharing (CORS), and verbose error messages containing sensitive information.

A8:2019 – Injection:

Injection flaws, such as SQL, NoSQL, Command Injection, etc. occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s malicious data can trick the interpreter into executing unintended commands or accessing data without proper authorization.
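A minimal probe for the SQL variant is to append a tautology to a parameter and watch whether the result set changes. The endpoint below is a hypothetical placeholder; the payload is the URL-encoded form of 1' OR '1'='1:

curl -s "https://api.example.com/v1/items?id=1%27%20OR%20%271%27%3D%271"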

A9:2019 – Improper Assets Management:

APIs tend to expose more endpoints than traditional web applications, making proper and updated documentation highly important. Proper hosts and deployed API versions inventory also play an important role to mitigate issues such as deprecated API versions and exposed debug endpoints.

A10:2019 – Insufficient Logging & Monitoring:

Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems to tamper with, extract, or destroy data. Most breach studies demonstrate the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.

Further Reading and References:

Please see the complete PDF or visit the OWASP website.


HTTP/2 Vulnerable to 8 DoS Attacks | CVE IDs declared

Overview:

Netflix has discovered several resource exhaustion vectors affecting a variety of third-party HTTP/2 implementations. These attack vectors can be used to launch DoS attacks against servers that support HTTP/2 communication.

Netflix worked with Google and CERT/CC to coordinate disclosure to the Internet community.

Today, a number of vendors have announced patches to correct this suboptimal behaviour. While we haven’t detected these vulnerabilities in our open source packages, we are issuing this security advisory to document our findings and to further assist the Internet security community in remediating these issues.

Impact:

There are three broad areas of information security: confidentiality (information can’t be read by unauthorised people), integrity (information can’t be changed by unauthorised people), and availability (information and systems are available when you want them). All of the changes announced today are in the “availability” category. These HTTP/2 vulnerabilities do not allow an attacker to leak or modify information.

Rather, they allow a small number of low bandwidth malicious sessions to prevent connection participants from doing additional work. These attacks are likely to exhaust resources such that other connections or processes on the same machine may also be impacted or crash.

The Weaknesses:

HTTP/2 (defined in RFCs 7540 and 7541) represents a significant change from HTTP/1.1. There are several new capabilities, including header compression and multiplexing of data from multiple streams, which make this attractive to the user community. To support these new features, HTTP/2 has grown to encompass some of the complexity of a Layer 3 transport protocol:

  • Data is now carried in binary frames;
  • There are both per-connection and per-stream windows that define how much data can be sent;
  • There are several ICMP-like control messages (ping, reset, and settings frames, for example) which operate at the HTTP/2 connection layer; and,
  • There is a fairly robust concept of stream prioritization.

While this added complexity enables some exciting new features, it also raises implementation questions. When implementations run on the internet and are exposed to malicious users, implementers may wonder:

  • Should I limit any of the control messages?
  • How do I implement the priority queueing scheme in a computationally efficient way?
  • How do I implement the flow-control algorithms in a computationally efficient way?
  • How could an attacker manipulate the flow-control algorithm at the HTTP/2 layer to cause unintended results? (And, can they manipulate the flow-control algorithms at both the HTTP/2 and TCP layers together to cause unintended results?)

The Security Considerations section of RFC 7540 (see Section 10.5) addresses some of this in a general way. However, unlike the expected “normal” behavior—which is well-documented and which implementations seem to follow very closely—the algorithms and mechanisms for detecting and mitigating “abnormal” behavior are significantly more vague and left as an exercise for the implementer. From a review of various software packages, it appears that this has led to a variety of implementations with a variety of good ideas, but also some weaknesses.

Why does this matter?

Most of these attacks work at the HTTP/2 transport layer. As illustrated in the diagram below, this layer sits above the TLS transport, but below the concept of a request. In fact, many of these attacks involve either 0 or 1 requests.

[Diagram: the HTTP/2 transport layer sits above TLS but below the request layer]

Since the early days of HTTP, tooling has been oriented around requests: logs often indicate requests (rather than connections); rate-limiting may occur at the request level; and, traffic controls may be triggered by requests.

By contrast, there is not as much tooling that looks at HTTP/2 connections to log, rate-limit, and trigger remediation based on a client’s behavior at the HTTP/2 connection layer. Therefore, organizations may find it more difficult to discover and block malicious HTTP/2 connections and may need to add additional tooling to handle these situations.

These attack vectors allow a remote attacker to consume excessive system resources. Some are efficient enough that a single end-system could potentially cause havoc on multiple servers. Other attacks are less efficient; however, even less efficient attacks can open the door for DDoS attacks which are difficult to detect and block.

Attacks

Many of the attack vectors we found (and which were fixed today) are variants on a theme: a malicious client asks the server to do something which generates a response, but the client refuses to read the response. This exercises the server’s queue management code. Depending on how the server handles its queues, the client can force it to consume excess memory and CPU while processing its requests.

These are the attacks which are being disclosed today, all discovered by Jonathan Looney of Netflix, except for CVE-2019-9518 which was discovered by Piotr Sikora of Google:

  • CVE-2019-9511 “Data Dribble”: The attacker requests a large amount of data from a specified resource over multiple streams. They manipulate window size and stream priority to force the server to queue the data in 1-byte chunks. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both, potentially leading to a denial of service.
  • CVE-2019-9512 “Ping Flood”: The attacker sends continual pings to an HTTP/2 peer, causing the peer to build an internal queue of responses. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both, potentially leading to a denial of service.
  • CVE-2019-9513 “Resource Loop”: The attacker creates multiple request streams and continually shuffles the priority of the streams in a way that causes substantial churn to the priority tree. This can consume excess CPU, potentially leading to a denial of service.
  • CVE-2019-9514 “Reset Flood”: The attacker opens a number of streams and sends an invalid request over each stream that should solicit a stream of RST_STREAM frames from the peer. Depending on how the peer queues the RST_STREAM frames, this can consume excess memory, CPU, or both, potentially leading to a denial of service.
  • CVE-2019-9515 “Settings Flood”: The attacker sends a stream of SETTINGS frames to the peer. Since the RFC requires that the peer reply with one acknowledgement per SETTINGS frame, an empty SETTINGS frame is almost equivalent in behavior to a ping. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both, potentially leading to a denial of service.
  • CVE-2019-9516 “0-Length Headers Leak”: The attacker sends a stream of headers with a 0-length header name and 0-length header value, optionally Huffman encoded into 1-byte or greater headers. Some implementations allocate memory for these headers and keep the allocation alive until the session dies. This can consume excess memory, potentially leading to a denial of service.
  • CVE-2019-9517 “Internal Data Buffering”: The attacker opens the HTTP/2 window so the peer can send without constraint; however, they leave the TCP window closed so the peer cannot actually write (many of) the bytes on the wire. The attacker then sends a stream of requests for a large response object. Depending on how the servers queue the responses, this can consume excess memory, CPU, or both, potentially leading to a denial of service.
  • CVE-2019-9518 “Empty Frames Flood”: The attacker sends a stream of frames with an empty payload and without the end-of-stream flag. These frames can be DATA, HEADERS, CONTINUATION and/or PUSH_PROMISE. The peer spends time processing each frame disproportionate to attack bandwidth. This can consume excess CPU, potentially leading to a denial of service. (Discovered by Piotr Sikora of Google)

Workarounds and Fixes

In most cases, an immediate workaround is to disable HTTP/2 support. However, this may cause performance degradation, and it might not be possible in all cases. To obtain software fixes, please contact your software vendor. More information can also be found in the CERT/CC vulnerability note.
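As a first step, it helps to know whether your servers negotiate HTTP/2 at all. A quick check with a reasonably recent curl (built with nghttp2 support; the hostname is a placeholder) prints "2" when HTTP/2 is in use:

curl -sI --http2 -o /dev/null -w "%{http_version}\n" https://example.com/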



Chaos Engineering – Defining Stability!

PRINCIPLES OF CHAOS ENGINEERING

Chaos Engineering is the discipline of experimenting on a system
in order to build confidence in the system’s capability
to withstand turbulent conditions in production.

Advances in large-scale, distributed software systems are changing the game for software engineering. As an industry, we are quick to adopt practices that increase flexibility of development and velocity of deployment. An urgent question follows on the heels of these benefits: How much confidence can we have in the complex systems that we put into production?

Even when all of the individual services in a distributed system are functioning properly, the interactions between those services can cause unpredictable outcomes. Unpredictable outcomes, compounded by rare but disruptive real-world events that affect production environments, make these distributed systems inherently chaotic.

We need to identify weaknesses before they manifest in system-wide, aberrant behaviors. Systemic weaknesses could take the form of: improper fallback settings when a service is unavailable; retry storms from improperly tuned timeouts; outages when a downstream dependency receives too much traffic; cascading failures when a single point of failure crashes; etc. We must address the most significant weaknesses proactively, before they affect our customers in production. We need a way to manage the chaos inherent in these systems, take advantage of increasing flexibility and velocity, and have confidence in our production deployments despite the complexity that they represent.

An empirical, systems-based approach addresses the chaos in distributed systems at scale and builds confidence in the ability of those systems to withstand realistic conditions. We learn about the behavior of a distributed system by observing it during a controlled experiment. We call this Chaos Engineering.

CHAOS IN PRACTICE

To specifically address the uncertainty of distributed systems at scale, Chaos Engineering can be thought of as the facilitation of experiments to uncover systemic weaknesses. These experiments follow four steps:

  1. Start by defining ‘steady state’ as some measurable output of a system that indicates normal behavior.
  2. Hypothesize that this steady state will continue in both the control group and the experimental group.
  3. Introduce variables that reflect real world events like servers that crash, hard drives that malfunction, network connections that are severed, etc.
  4. Try to disprove the hypothesis by looking for a difference in steady state between the control group and the experimental group.

The harder it is to disrupt the steady state, the more confidence we have in the behavior of the system. If a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.
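As a toy illustration of the four steps, the sketch below treats the success rate of 100 HTTP probes as the steady-state metric, then injects a failure. The service address, health endpoint and worker process name are hypothetical, and a real experiment would run against production traffic with a proper control group:

#!/bin/bash
# Steps 1-2: define and measure steady state (probe success rate).
steady_state() {
  local ok=0
  for _ in $(seq 1 100); do
    code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health)
    [ "$code" = "200" ] && ok=$((ok + 1))
  done
  echo "$ok"
}

echo "Control steady state: $(steady_state)/100 probes OK"
# Step 3: introduce a variable that reflects a real-world event.
pkill -f myapp-worker   # hypothetical worker process name
# Step 4: try to disprove the hypothesis that steady state holds.
echo "Experimental steady state: $(steady_state)/100 probes OK"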

ADVANCED PRINCIPLES

The following principles describe an ideal application of Chaos Engineering, applied to the processes of experimentation described above. The degree to which these principles are pursued strongly correlates to the confidence we can have in a distributed system at scale.

Build a Hypothesis around Steady State Behavior

Focus on the measurable output of a system, rather than internal attributes of the system. Measurements of that output over a short period of time constitute a proxy for the system’s steady state. The overall system’s throughput, error rates, latency percentiles, etc. could all be metrics of interest representing steady state behavior. By focusing on systemic behavior patterns during experiments, Chaos verifies that the system does work, rather than trying to validate how it works.

Vary Real-world Events

Chaos variables reflect real-world events. Prioritize events either by potential impact or estimated frequency. Consider events that correspond to hardware failures like servers dying, software failures like malformed responses, and non-failure events like a spike in traffic or a scaling event. Any event capable of disrupting steady state is a potential variable in a Chaos experiment.

Run Experiments in Production

Systems behave differently depending on environment and traffic patterns. Since the behavior of utilization can change at any time, sampling real traffic is the only way to reliably capture the request path. To guarantee both authenticity of the way in which the system is exercised and relevance to the current deployed system, Chaos strongly prefers to experiment directly on production traffic.

Automate Experiments to Run Continuously

Running experiments manually is labor-intensive and ultimately unsustainable. Automate experiments and run them continuously. Chaos Engineering builds automation into the system to drive both orchestration and analysis.

Minimize Blast Radius

Experimenting in production has the potential to cause unnecessary customer pain. While there must be an allowance for some short-term negative impact, it is the responsibility and obligation of the Chaos Engineer to ensure that the fallout from experiments is minimized and contained.

Chaos Engineering is a powerful practice that is already changing how software is designed and engineered at some of the largest-scale operations in the world.  Where other practices address velocity and flexibility, Chaos specifically tackles systemic uncertainty in these distributed systems.  The Principles of Chaos provide confidence to innovate quickly at massive scales and give customers the high quality experiences they deserve.


How to Design an Application Architecture

Before you start designing an application architecture for any cloud, you need to start from a consideration of the main common quality attributes of the cloud:

  • Scalability is the capability to adjust system capacity based on current needs. For example, let’s say you’re developing an internet shop. You know that before Christmas the number of orders will grow significantly, and you will need additional resources in order to handle every request. At the same time, during periods of normal operation, you don’t need as many resources.
  • Availability is the proportion of time when the system is functional and working.
  • Monitoring. In general, you need to think of the cloud as a remote data center. At the same time, you need to take into account that you may have a great number of servers, and also consider the dynamic nature of the cloud.
  • Security is the capability of the system to prevent data loss, information leaks, and unauthorized usage.
  • Cost. You have unlimited access to compute, storage and network resources and pay as you go, and every resource has its own cost. If you don’t use these resources wisely, you may pay a high price.
  • Time-to-market is the time required to deliver your service to customers.

All these challenges are input for your architecture design. In your architecture, you need to provide answers to how you’re going to respond to these challenges. In order to do this, you need to apply architecture tactics that give you a vision of how you’re going to achieve the defined system quality attributes.

Time-to-market

The key driver of architecture decisions, in most cases, is time-to-market. Business always dictates when the system needs to go live in order to achieve the organization’s goals. To achieve time-to-market, you need to consider the following things:

  • Build versus buy. It is always attractive for software engineers to create something cool and brand new from scratch. However, it can require a significant amount of effort, and there is no guarantee that your solution will be successful. Before you start designing a solution, check whether something ready to use is available on the market. If there is an option that works for you, always choose the ready-to-use solution. This decision can significantly reduce implementation effort and, often, solution cost.

Here’s a simple example: your solution requires a MySQL database. In this case, you can definitely use a cloud solution. Let’s assume it is AWS. If you run your own MySQL server on Infrastructure as a Service (IaaS), you need to spend time on deployment and configuration scripts, and you need to worry about server availability, security, backups and disaster recovery. Instead, you can easily spin up a new RDS DB instance with a MySQL engine. Yes, it will be preconfigured by Amazon and may have some limitations, but you spend five minutes deploying a MySQL server instead of several days developing deployment scripts.
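For comparison, the managed path can be a single CLI call. This is a minimal sketch assuming the AWS CLI is installed and configured; the identifier, size and password are illustrative only:

# Spin up a small managed MySQL instance on RDS.
aws rds create-db-instance \
  --db-instance-identifier shop-db \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'change-me-now'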

  • Development environment. If your development team can’t use an environment that’s pretty close to the production configuration, it is a road to a great number of defects, low development velocity and an unpredictable delivery date. When you design a solution, you need to choose a technology stack and services that each developer on your team can use to work on tasks. Developers need isolated environments so that they can work independently. As a fast solution, you may give the development team unlimited access to the cloud. This approach works fine; however, there is a negative side: cloud resources are expensive. The infrastructure-as-code approach is the best option.

Your development team may use Vagrant or Docker in order to run the system locally on a workstation. You need to carefully choose which of your system components can run locally and which can only be used in the cloud. For example, if we’re talking about a database, you may choose between DynamoDB and MySQL (for instance). DynamoDB is great; however, you can’t simply deploy it locally, so you need to think about how to handle it in isolated development environments. MySQL is well known by software engineers, and it can be deployed locally. Which is best? There is no right answer to this question, because you need to choose the one that is best for you and your team, taking into account your goals, team expertise, etc.

Scalability

There are two ways to manage your system capacity:

  • You may add more CPU, memory and storage to a server—vertical scalability.
  • You may add more servers if you need more processing power or reduce the number of servers when you don’t have a need to keep a full fleet of servers. This approach is called horizontal scalability.

In order to achieve scalability and elasticity of your infrastructure, follow these recommendations:

  • Don’t think about servers as static resources. Servers in the cloud can be easily terminated, replaced or added. It is often too expensive to recover a server in the cloud. If you have the right design and implementation, you can terminate a server and replace it with a new one. In order to do this, you have the following options:
    • Golden images. You may create server images that have an installation of the specific version of your system. They can be virtual machine or Docker images.
    • Bootstrap scripts. When a new virtual machine starts, it runs a bootstrap script that installs all the required configuration on the machine. For this approach, you may use Chef, Puppet, Ansible or even bash scripts; a minimal sketch follows this list.
  • If you want to achieve horizontal scalability, you need to design stateless components.
  • If for some reason you have a component that has to keep a state, use a vertical scalability approach. A good example of a component that keeps state and can be vertically scaled is a MySQL database server.
  • Vertical scalability may have limits too. For example, you may give your application a server with multiple CPUs and several gigabytes of RAM; however, your application needs to be able to consume all of those resources.
  • Design a system with loose coupling components and use messaging for communication between components.
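Here is a minimal bootstrap sketch for a stateless web node, assuming a Debian/Ubuntu base image; the artifact URL is a hypothetical placeholder. A script like this would typically run from cloud-init/user data:

#!/bin/bash
set -euo pipefail
# Install the web server and pull the current application build.
apt-get update
apt-get install -y nginx
curl -fsSL https://artifacts.example.com/myapp/current.tar.gz \
  | tar -xz -C /var/www/html
systemctl enable --now nginx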

Availability

The general strategy for an architecture design for the cloud is to design for failure. You need to consider that cloud services and third-party services can sometimes be unavailable. In case of a hardware failure, the cloud provider may terminate your server in order to move it to another server rack, without any notification. General tactics to achieve high availability are:

  • Reduce single points of failure. You need to provision a redundant number of servers.
  • Distribute your servers and services between different geographic locations.
  • Use messaging for communication between components. It can guarantee that you will not lose a message when a component is unavailable.

Monitoring

If you have just one server or even a few servers, you can collect system metrics, check application status, or inspect logs by hand. However, what do you do if you have hundreds of servers, and the servers can be added or replaced at any point in time?

  • Choose a system monitoring solution. Oftentimes, cloud providers supply some basic functionality out of the box. However, if that is not enough, you may choose between the different monitoring solutions available on the market.
  • Aggregate logs and store them on reliable storage. To perform troubleshooting and incident investigation in an elastic environment, you need to publish logs to a remote server and have the ability to search them. If you want to build your own log aggregation solution, Elastic is the de facto standard: you can choose to deploy the native ELK stack or opt for Graylog. If you prefer to use SaaS, you may choose between Sumo Logic, Loggly or any other service available on the market.
  • Implement health check endpoints for each system component. The health check endpoint should return a status of OK or Failure. If the service uses external dependencies like a database or remote services, it is a good idea to show a dependency status as well; a simple probe for such an endpoint is sketched below.
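A probe for such an endpoint can be as small as the sketch below; the URL and the JSON field it greps for are hypothetical:

# Poll the health endpoint; exit non-zero so cron/monitoring can alert.
if ! curl -sf -m 5 https://myapp.example.com/healthz | grep -q '"status": *"OK"'; then
  echo "ALERT: myapp health check failed" >&2
  exit 1
fi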

Security

Security is always important. First, you need to decide how you’re going to manage access to the cloud. Each cloud provider supplies their own solution for Identity and Account Management (IAM). The solution gives you the ability to configure granular access to cloud resources. Then, you may apply the following recommendations to your solution:

  • Encrypt communication between service components.
  • Encrypt all sensitive information.
  • Grant the minimum required access level to the cloud services for your application.
  • Design a solution to rotate encryption keys and credentials.
  • Use virtual private networks and expose only public endpoints to the internet.

Cost

  • Always check cloud pricing and available discounts to achieve cost optimization.
  • Horizontal scalability is a powerful tool for cost optimization. However, it is not a silver bullet. In some cases it makes sense to choose a bigger server; for example, you might do this if you get a discount for paying in advance for a larger server. You may also run a hybrid setup, with one big server that handles the average load and smaller ones added when you need to increase system capacity.
  • Shut down unused resources. If you know that a minimal number of users access the system at night, you may shut down and spin up servers on a schedule, or based on any other metric available in the cloud; see the crontab sketch below.
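The crontab sketch below stops a development fleet on weekday nights and brings it back in the morning; it assumes a configured AWS CLI and uses a placeholder instance ID:

# Stop the dev fleet at 20:00 and start it at 07:00, Monday to Friday.
0 20 * * 1-5  aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
0 7  * * 1-5  aws ec2 start-instances --instance-ids i-0123456789abcdef0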

Benchmark a cloud platform(s)

After you’ve defined the quality attributes of your future or legacy system, the next step is to choose the cloud provider that is best for your needs. Yes, you should test a cloud provider and check whether they can meet the quality attributes that you defined. A couple of standard areas you need to consider are:

  • Presence in different regions. It is always a good idea to deploy your infrastructure as close to consumers as possible. Also, if you care about high availability of your service, you definitely need to deploy the infrastructure across multiple sites.
  • Check if they have accelerators to help you deploy and deliver your solution faster. These can be platform as a service offerings, managed services, data migration services, and even virtual machines of the right specification. If you can use a managed service provided out of the box, you save on operations effort.
  • Network performance. If your solution requires specific networking performance, always benchmark the cloud network. Each cloud provider has their own vision of what network performance and quality is best for their target audience.

Use a DevOps approach and implement infrastructure as code

In a traditional data center, in most cases, you work with a fixed number of servers, so adding a new machine or decommissioning an existing one is an expensive and slow process. On the other hand, the server already exists, and you don’t need to worry about deploying it from scratch. In the cloud world, a server is a compute unit that can be easily deployed or terminated at any given moment. Conducting manual operations in the cloud, like server configuration and deployment, is a road to disaster. At the same time, developing directly in the cloud is expensive, and if you develop in an environment that differs from the target environment, you may face challenges during deployment. Challenges are not fun, so you’d better start by building the right development environment for the development and operations teams at the initial stages; you can evaluate the current state of the project with the help of a project maturity model. All team members need access and the ability to run a copy of the development environment, locally or in the cloud. If anything gets broken, an engineer should be able to rebuild the environment from scratch.

To achieve these goals, consider the following tools:

  • Vagrant is, in fact, an abstraction over different virtualization and cloud platforms. You can describe your server, or even a cluster configuration, in code. Vagrant lets you use different provisioning tools, such as plain scripts, Chef, Puppet, Ansible, etc., to bootstrap a virtual machine. You can use Vagrant as a playground for cluster configuration testing, as well as for testing application deployment. This way, software engineers can test an application in an environment that is pretty close to the target configuration.
  • Using provisioning tools (Chef, Puppet, Ansible or even a plain bash script) is much better than manual deployment.
  • Docker is another great tool for infrastructure as code, especially if you’re going to use containerization in the cloud. With the help of Docker and Docker Compose you can emulate the target infrastructure locally and deploy only tested container images to the cloud; see the sketch after this list.
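For instance, a developer can mirror a two-tier topology locally with a few Docker commands before anything reaches the cloud; the myapp:dev image tag is a hypothetical placeholder:

# Emulate the target topology locally: one database, one app container.
docker network create myapp-net
docker run -d --name db  --network myapp-net \
  -e MYSQL_ROOT_PASSWORD=devpass mysql:8
docker run -d --name app --network myapp-net -p 8080:8080 myapp:dev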

Conclusion

As a summary, I’d like to stress that the overall approach to architecture design for the cloud is the same as for any other software. As usual, you need to start by defining the target quality attributes that describe your expectations, then apply the tactics mentioned above. The difference between designing for the cloud and for on-premises is that in the cloud, infrastructure is also software. This means you need to design your cloud solution as if software and infrastructure were one single piece of a whole. To do this, you will need to find tradeoffs between software design and infrastructure. However, every application and business case is unique, so the tradeoffs will have to be carefully considered to find harmony between the target quality attributes, the design, and the business needs.


Securing NGINX Web Server

#1: Turn On SELinux

Security-Enhanced Linux (SELinux) is a Linux kernel feature that provides a mechanism for supporting access control security policies, and it provides great protection. It can stop many attacks before your system is rooted. See how to turn on SELinux for CentOS / RHEL based systems.

Do Boolean Lockdown

Run the getsebool -a command and lock down the system:

getsebool -a | less
getsebool -a | grep off
getsebool -a | grep on

To secure the machine, look at settings which are set to ‘on’ and change them to ‘off’ if they do not apply to your setup, with the help of the setsebool command. Set the correct SELinux booleans to maintain functionality and protection. Please note that SELinux adds a 2-8% overhead to a typical RHEL or CentOS installation.
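For example, to persistently turn off a boolean that does not apply to your setup (boolean names vary by distro and policy, so check the getsebool -a output first):

# Persistently (-P) disable outbound network connections from httpd contexts.
setsebool -P httpd_can_network_connect off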

#2: Allow Minimal Privileges Via Mount Options

Serve all your webpages / html / php files from a separate partition. For example, create a partition called /dev/sda5 and mount it at /nginx. Make sure /nginx is mounted with the noexec, nodev and nosuid permissions. Here is my /etc/fstab entry for mounting /nginx:

LABEL=/nginx     /nginx          ext3   defaults,nosuid,noexec,nodev 1 2

Note you need to create a new partition using fdisk and mkfs.ext3 commands.

#3: Linux /etc/sysctl.conf Hardening

You can control and configure Linux kernel and networking settings via /etc/sysctl.conf.

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1
 
# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1
 
# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1
 
# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
 
# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
 
# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
 
# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
 
# Don't act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
 
 
# Turn on exec-shield
kernel.exec-shield = 1
kernel.randomize_va_space = 1
 
# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1
 
# Optimization for port use for LBs
# Increase system file descriptor limit
fs.file-max = 65535
 
# Allow for more PIDs (to reduce rollover problems); may break some programs 32768
kernel.pid_max = 65536
 
# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000
 
# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
 
# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# Tcp Windows etc
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
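Load the new settings without a reboot and spot-check one of them:

# sysctl -p /etc/sysctl.conf
# sysctl net.ipv4.tcp_syncookies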


#4: Remove All Unwanted Nginx Modules

You need to minimize the number of modules that are compiled directly into the nginx binary. This minimizes risk by limiting the capabilities allowed by the web server. You can configure and install nginx using only the required modules. For example, to disable the SSI and autoindex modules, you can type:
# ./configure --without-http_autoindex_module --without-http_ssi_module
# make
# make install

Type the following command to see which modules can be turned on or off while compiling the nginx server:
# ./configure --help | less
Disable any nginx modules that you don’t need.

(Optional) Change Nginx Version Header

Edit src/http/ngx_http_header_filter_module.c, enter:
# vi +48 src/http/ngx_http_header_filter_module.c
Find the lines:

static char ngx_http_server_string[] = "Server: nginx" CRLF;
static char ngx_http_server_full_string[] = "Server: " NGINX_VER CRLF;

Change them as follows:

static char ngx_http_server_string[] = "Server: Ninja Web Server" CRLF;
static char ngx_http_server_full_string[] = "Server: Ninja Web Server" CRLF;

Save and close the file. Now you can compile the server. Add the following to nginx.conf to turn off the nginx version number displayed on all auto-generated error pages:

server_tokens off;

#5: Use mod_security (only for backend Apache servers)

mod_security provides an application-level firewall for Apache. Install mod_security on all backend Apache web servers. This will stop many injection attacks.

#6: Install SELinux Policy To Harden The Nginx Webserver

By default, SELinux does not protect the nginx web server. However, you can install and compile protection as follows. First, install the required SELinux compile-time support:
# yum -y install selinux-policy-targeted selinux-policy-devel
Download targeted SELinux policies to harden the nginx webserver on Linux servers from the project home page:
# cd /opt
# wget 'http://downloads.sourceforge.net/project/selinuxnginx/se-ngix_1_0_10.tar.gz?use_mirror=nchc'

Untar the same:
# tar -zxvf se-ngix_1_0_10.tar.gz
Compile the same
# cd se-ngix_1_0_10/nginx
# make

Sample outputs:

Compiling targeted nginx module
/usr/bin/checkmodule:  loading policy configuration from tmp/nginx.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 6) to tmp/nginx.mod
Creating targeted nginx.pp policy package
rm tmp/nginx.mod.fc tmp/nginx.mod

Install the resulting nginx.pp SELinux module:
# /usr/sbin/semodule -i nginx.pp

#7: Restrictive Iptables Based Firewall

The following firewall script blocks everything and only allows:

  • Incoming HTTP (TCP port 80) requests
  • Incoming ICMP ping requests
  • Outgoing ntp (port 123) requests
  • Outgoing smtp (TCP port 25) requests
#!/bin/bash
IPT="/sbin/iptables"
 
#### IPS ######
# Get server public ip
SERVER_IP=$(ifconfig eth0 | grep 'inet addr:' | awk -F'inet addr:' '{ print $2}' | awk '{ print $1}')
LB1_IP="204.54.1.1"
LB2_IP="204.54.1.2"
 
# Do some smart logic so that we can use the same script on LB2 too
OTHER_LB=""
[[ "$SERVER_IP" == "$LB1_IP" ]] && OTHER_LB="$LB2_IP" || OTHER_LB="$LB1_IP"
[[ "$OTHER_LB" == "$LB2_IP" ]] && OPP_LB="$LB1_IP" || OPP_LB="$LB2_IP"
 
### IPs ###
PUB_SSH_ONLY="122.xx.yy.zz/29"
 
#### FILES #####
BLOCKED_IP_TDB=/root/.fw/blocked.ip.txt
SPOOFIP="127.0.0.0/8 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16 0.0.0.0/8 240.0.0.0/4 255.255.255.255/32 224.0.0.0/4 240.0.0.0/5 248.0.0.0/5 192.0.2.0/24"
BADIPS=$( [[ -f ${BLOCKED_IP_TDB} ]] && egrep -v "^#|^$" ${BLOCKED_IP_TDB})
 
### Interfaces ###
PUB_IF="eth0"   # public interface
LO_IF="lo"      # loopback
VPN_IF="eth1"   # vpn / private net
 
### start firewall ###
echo "Setting LB1 $(hostname) Firewall..."
 
# DROP and close everything
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP
 
# Unlimited lo access
$IPT -A INPUT -i ${LO_IF} -j ACCEPT
$IPT -A OUTPUT -o ${LO_IF} -j ACCEPT
 
# Unlimited vpn / pnet access
$IPT -A INPUT -i ${VPN_IF} -j ACCEPT
$IPT -A OUTPUT -o ${VPN_IF} -j ACCEPT
 
# Drop sync
$IPT -A INPUT -i ${PUB_IF} -p tcp ! --syn -m state --state NEW -j DROP
 
# Drop Fragments
$IPT -A INPUT -i ${PUB_IF} -f -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL FIN,URG,PSH -j DROP
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL ALL -j DROP
 
# Drop NULL packets
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " NULL Packets "
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
 
# Drop XMAS
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " XMAS Packets "
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
 
# Drop FIN packet scans
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " Fin Packets Scan "
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
 
# Log and get rid of broadcast / multicast and invalid
$IPT  -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j LOG --log-prefix " Broadcast "
$IPT  -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j LOG --log-prefix " Multicast "
$IPT  -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -m state --state INVALID -j LOG --log-prefix " Invalid "
$IPT  -A INPUT -i ${PUB_IF} -m state --state INVALID -j DROP
 
# Log and block spoofed ips
$IPT -N spooflist
for ipblock in $SPOOFIP
do
         $IPT -A spooflist -i ${PUB_IF} -s $ipblock -j LOG --log-prefix " SPOOF List Block "
         $IPT -A spooflist -i ${PUB_IF} -s $ipblock -j DROP
done
$IPT -I INPUT -j spooflist
$IPT -I OUTPUT -j spooflist
$IPT -I FORWARD -j spooflist
 
# Allow ssh only from selected public ips
for ip in ${PUB_SSH_ONLY}
do
        $IPT -A INPUT -i ${PUB_IF} -s ${ip} -p tcp -d ${SERVER_IP} --destination-port 22 -j ACCEPT
        $IPT -A OUTPUT -o ${PUB_IF} -d ${ip} -p tcp -s ${SERVER_IP} --sport 22 -j ACCEPT
done
 
# allow incoming ICMP ping pong stuff
$IPT -A INPUT -i ${PUB_IF} -p icmp --icmp-type 8 -s 0/0 -m state --state NEW,ESTABLISHED,RELATED -m limit --limit 30/sec  -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p icmp --icmp-type 0 -d 0/0 -m state --state ESTABLISHED,RELATED -j ACCEPT
 
# allow incoming HTTP port 80
$IPT -A INPUT -i ${PUB_IF} -p tcp -s 0/0 --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --sport 80 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
 
 
# allow outgoing ntp
$IPT -A OUTPUT -o ${PUB_IF} -p udp --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT
 
# allow outgoing smtp
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
 
### add your other rules here ####
 
#######################
# drop and log everything else
$IPT -A INPUT -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " DEFAULT DROP "
$IPT -A INPUT -j DROP
 
exit 0

See the following tutorials for more info:

  1. How to setup a UFW firewall on Ubuntu 18.04 LTS server
  2. CentOS / Redhat Iptables Firewall Configuration Tutorial
  3. Linux: 20 Iptables Examples For New SysAdmins

#8: Controlling Buffer Overflow Attacks

Edit nginx.conf:
# vi /usr/local/nginx/conf/nginx.conf
Set the buffer size limitations for all clients as follows:

 ## Start: Size Limits & Buffer Overflows ##
  client_body_buffer_size  1K;
  client_header_buffer_size 1k;
  client_max_body_size 1k;
  large_client_header_buffers 2 1k;
 ## END: Size Limits & Buffer Overflows ##

Where,

  1. client_body_buffer_size 1k – (default is 8k or 16k) The directive specifies the client request body buffer size.
  2. client_header_buffer_size 1k – Directive sets the header buffer size for the request header from the client. For the overwhelming majority of requests a buffer size of 1K is sufficient. Increase this if you have a custom header or a large cookie sent from the client (e.g., a WAP client).
  3. client_max_body_size 1k – Directive assigns the maximum accepted body size of a client request, as indicated by the Content-Length request header. If the size is greater than the given one, the client gets the error “Request Entity Too Large” (413). Increase this when you accept file uploads via the POST method.
  4. large_client_header_buffers 2 1k – Directive assigns the maximum number and size of buffers for large headers to read from the client request. By default the size of one buffer is equal to the page size, either 4K or 8K depending on the platform. If the connection converts to the keep-alive state at the end of a request, these buffers are freed. 2x1k will accept a 2 kB data URI. This will also help combat bad bots and DoS attacks.

You also need to control timeouts to improve server performance and cut off idle clients. Edit it as follows:

 ## Start: Timeouts ##
  client_body_timeout   10;
  client_header_timeout 10;
  keepalive_timeout     5 5;
  send_timeout          10;
## End: Timeouts ##
  1. client_body_timeout 10; – Directive sets the read timeout for the request body from the client. The timeout applies only if the body is not received in one read step. If the client sends nothing within this time, nginx returns the error “Request time out” (408). The default is 60.
  2. client_header_timeout 10; – Directive assigns the timeout for reading the client request header. The timeout applies only if the header is not received in one read step. If the client sends nothing within this time, nginx returns the error “Request time out” (408).
  3. keepalive_timeout 5 5; – The first parameter assigns the timeout for keep-alive connections with the client; the server will close connections after this time. The optional second parameter assigns the time value in the Keep-Alive: timeout=time response header. This header can convince some browsers to close the connection, so that the server does not have to. Without this parameter, nginx does not send a Keep-Alive header (though this is not what makes a connection “keep-alive”).
  4. send_timeout 10; – Directive assigns the response timeout to the client. The timeout applies not to the entire transfer of the answer, but only between two successive write operations; if the client takes nothing within this time, nginx shuts down the connection.

#9: Control Simultaneous Connections

You can use the NginxHttpLimitZone module to limit the number of simultaneous connections for the assigned session or, as a special case, from one IP address. Note that on current nginx releases the limit_zone directive has been replaced by limit_conn_zone. Edit nginx.conf:

### Directive describes the zone, in which the session states are stored i.e. store in slimits. ###
### 1m can handle 32000 sessions with 32 bytes/session, set to 5m x 32000 session ###
       limit_zone slimits $binary_remote_addr 5m;
 
### Control maximum number of simultaneous connections for one session i.e. ###
### restricts the amount of connections from a single ip address ###
        limit_conn slimits 5;

The above limits remote clients to no more than 5 concurrently “open” connections per remote IP address.
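A rough way to verify the limit (best against a slow or large resource, otherwise requests finish before they overlap) is to open parallel requests from one IP; those beyond the limit of 5 should receive HTTP 503:

# Open 10 parallel requests and print each status code.
for _ in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost/ &
done
wait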

#10: Allow Access To Our Domain Only

If a bot is just making random scans of servers for all domains, simply deny it. You should only allow requests for your configured virtual domains or reverse-proxied hosts. You don’t want the server to answer requests made against a bare IP address:

## Only requests to our Host are allowed i.e. nixcraft.in, images.nixcraft.in and www.nixcraft.in
      if ($host !~ ^(nixcraft\.in|www\.nixcraft\.in|images\.nixcraft\.in)$ ) {
         return 444;
      }
##

#11: Limit Available Methods

GET and POST are the most common methods on the Internet. Web server methods are defined in RFC 2616. If a web server does not require the implementation of all available methods, they should be disabled. The following will filter and only allow GET, HEAD and POST methods:

## Only allow these request methods ##
     if ($request_method !~ ^(GET|HEAD|POST)$ ) {
         return 444;
     }
## Do not accept DELETE, SEARCH and other methods ##

More About HTTP Methods

  • The GET method is used to request a document, such as https://amisafe.secops.in/index.php.
  • The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.
  • The POST method may involve anything, like storing or updating data, ordering a product, or sending e-mail by submitting a form. This is usually processed by server-side scripting such as PHP, Perl, Python and so on. You must allow it if you want to upload files or process forms on the server.

#12: How Do I Deny Certain User-Agents?

You can easily block user-agents i.e. scanners, bots, and spammers who may be abusing your server.

## Block download agents ##
     if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
            return 403;
     }
##

Block robots called msnbot and scrapbot:

## Block some robots ##
     if ($http_user_agent ~* msnbot|scrapbot) {
            return 403;
     }

#13: How Do I Block Referral Spam?

Referer spam is dangerous. It can harm your SEO ranking via web logs (if published), as the referer field refers to the spammy site. You can block access from referer spammers with these lines.

## Deny certain Referers ###
     if ( $http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen) )
     {
         # return 404;
         return 403;
     }
##

#14: How Do I Stop Image Hotlinking?

Image or HTML hotlinking means someone links to one of your images from their own site, so that it displays there. The end result: you end up paying the bandwidth bill, and the content looks like part of the hijacker’s site. This is usually done on forums and blogs. I strongly suggest you block and stop image hotlinking at your server level itself.

# Stop deep linking or hot linking
location /images/ {
  valid_referers none blocked www.example.com example.com;
   if ($invalid_referer) {
     return   403;
   }
}

Example: Rewrite And Display Image

Another example, rewriting hotlinked images to a banned-image placeholder:

valid_referers blocked www.example.com example.com;
 if ($invalid_referer) {
  rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.examples.com/banned.jpg last;
 }

See also:

  • HowTo: Use nginx map to block image hotlinking. This is useful if you want to block tons of domains.

#15: Directory Restrictions

You can set access control for a specified directory. All web directories should be configured on a case-by-case basis, allowing access only where needed.

Limiting Access By IP Address

You can limit access to the /docs/ directory by IP address:

location /docs/ {
  ## block one workstation
  deny    192.168.1.1;

  ## allow anyone in 192.168.1.0/24
  allow   192.168.1.0/24;

  ## drop rest of the world
  deny    all;
}

Password Protect The Directory

First create the password file and add a user called vivek:
# mkdir /usr/local/nginx/conf/.htpasswd/
# htpasswd -c /usr/local/nginx/conf/.htpasswd/passwd vivek

Edit nginx.conf and protect the required directories as follows:

### Password Protect /personal-images/ and /delta/ directories ###
location ~ /(personal-images/.*|delta/.*) {
  auth_basic  "Restricted";
  auth_basic_user_file   /usr/local/nginx/conf/.htpasswd/passwd;
}

Once a password file has been generated, subsequent users can be added with the following command:
# htpasswd -s /usr/local/nginx/conf/.htpasswd/passwd userName

#16: Nginx SSL Configuration

HTTP is a plain-text protocol and is open to passive monitoring. You should use SSL/TLS to encrypt your content for users:

  1. HowTo: Create a Self-Signed SSL Certificate on Nginx For CentOS / RHEL
  2. How to configure Nginx with free Let’s Encrypt SSL certificate on Debian or Ubuntu Linux
  3. How to install Letsencrypt free SSL/TLS for Nginx certificate on Alpine Linux
  4. How to configure Nginx SSL/TLS passthrough with TCP load balancing
  5. nginx: Setup SSL Reverse Proxy (Load Balanced SSL Proxy)

#17: Nginx And PHP Security Tips

PHP is one of the most popular server-side scripting languages. Edit /etc/php.ini as follows. (Note: the safe_mode, register_globals and sql.safe_mode settings below only exist in legacy PHP releases; they have been removed from modern PHP.)

# Disallow dangerous functions
disable_functions = phpinfo, system, mail, exec
 
## Try to limit resources  ##
 
# Maximum execution time of each script, in seconds
max_execution_time = 30
 
# Maximum amount of time each script may spend parsing request data
max_input_time = 60
 
# Maximum amount of memory a script may consume (8MB)
memory_limit = 8M
 
# Maximum size of POST data that PHP will accept.
post_max_size = 8M
 
# Whether to allow HTTP file uploads.
file_uploads = Off
 
# Maximum allowed size for uploaded files.
upload_max_filesize = 2M
 
# Do not expose PHP error messages to external users
display_errors = Off
 
# Turn on safe mode
safe_mode = On
 
# Only allow access to executables in isolated directory
safe_mode_exec_dir = php-required-executables-path
 
# Limit external access to PHP environment
safe_mode_allowed_env_vars = PHP_
 
# Restrict PHP information leakage
expose_php = Off
 
# Log all errors
log_errors = On
 
# Do not register globals for input data
register_globals = Off
 
# Minimize allowable PHP post size
post_max_size = 1K
 
# Ensure PHP redirects appropriately
cgi.force_redirect = 0
 
# Disallow uploading unless necessary
file_uploads = Off
 
# Enable SQL safe mode
sql.safe_mode = On
 
# Avoid Opening remote files
allow_url_fopen = Off


#18: Run Nginx In A Chroot Jail (Containers) If Possible

Putting nginx in a chroot jail minimizes the damage done by a potential break-in by isolating the web server to a small section of the filesystem. You can use traditional chroot kind of setup with nginx. If possible use FreeBSD jails, XEN, Linux containers on a Debian/Ubuntu, LXD on a Fedora, or OpenVZ virtualization which uses the concept of containers.

#19: Limit Connections Per IP At The Firewall Level

A web server must keep an eye on connections and limit connections per second. This is serving 101. Both pf and iptables can throttle end users before they reach your nginx server.

Linux Iptables: Throttle Nginx Connections Per Second

The following example will drop incoming connections if an IP makes more than 15 connection attempts to port 80 within 60 seconds:

/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60  --hitcount 15 -j DROP
service iptables save

BSD PF: Throttle Nginx Connections Per Second

Edit your /etc/pf.conf and update it as follows. The following limits the maximum number of connections per source to 100. 15/5 specifies the number of connections per span of seconds, i.e. it rate-limits the number of connections to 15 in a 5 second span. If anyone breaks our rules, they are added to our abusive_ips table and blocked from making any further connections. Finally, the flush keyword kills all states created by the matching rule which originate from the host that exceeds these limits.

webserver_ip="202.54.1.1"
table <abusive_ips> persist
block in quick from <abusive_ips>
pass in on $ext_if proto tcp to $webserver_ip port www flags S/SA keep state (max-src-conn 100, max-src-conn-rate 15/5, overload <abusive_ips> flush)

Please adjust all values as per your requirements and traffic (browsers may open multiple connections to your site). See also:

  1. Sample PF firewall script.
  2. Sample Iptables firewall script.

#20: Configure Operating System to Protect Web Server

Turn on SELinux as described above. Set correct permissions on the /nginx document root. nginx runs as a user named nginx. However, the files in the document root (/nginx or /usr/local/nginx/html) should not be owned or writable by that user. To find files with wrong permissions, use:
# find /nginx -user nginx
# find /usr/local/nginx/html -user nginx

Make sure you change file ownership to root or another user. A typical set of permissions for /usr/local/nginx/html/:
# ls -l /usr/local/nginx/html/
Sample outputs:

-rw-r--r-- 1 root root 925 Jan  3 00:50 error4xx.html
-rw-r--r-- 1 root root  52 Jan  3 10:00 error5xx.html
-rw-r--r-- 1 root root 134 Jan  3 00:52 index.html

You must delete unwanted backup files created by vi or other text editors:
# find /nginx -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'
# find /usr/local/nginx/html/ -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'

Pass the -delete option to the find command and it will get rid of those files too.

#21: Restrict Outgoing Nginx Connections

Crackers will download files onto your server using tools such as wget. Use iptables to block outgoing connections from the nginx user. The ipt_owner module attempts to match various characteristics of the packet creator, for locally generated packets; it is only valid in the OUTPUT chain. In this example, allow the vivek user to connect outside using port 80 (useful for RHN access or to grab CentOS updates via repos):

/sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED  -j ACCEPT

Add above rule to your iptables based shell script. Do not allow nginx web server user to connect outside.

#22: Keep your software up to date

You must keep your software and kernel up to date at all times. Apply patches as per your version or distro. If you are using Debian/Ubuntu Linux, use the apt-get/apt command to apply patches:
$ sudo apt-get update
$ sudo apt-get upgrade

If you are using a RHEL/CentOS/Oracle/Scientific Linux, use yum command:
$ sudo yum update
If you are using an Alpine Linux use apk command:
# apk update
# apk upgrade

#23: Avoid clickjacking

Add the following in your nginx.conf or virtual domain to avoid clickjacking:
add_header X-Frame-Options SAMEORIGIN;

#24: Disable content-type sniffing on some browsers

Add the following in your nginx.conf or virtual domain:
add_header X-Content-Type-Options nosniff;

#25: Enable the Cross-site scripting (XSS) filter

Add the following in your nginx.conf or virtual domain:
add_header X-XSS-Protection "1; mode=block";
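To confirm that the three headers from the last three tips are actually emitted, run a quick check against your site (the hostname is a placeholder):

curl -sI https://example.com/ | grep -Ei 'x-frame-options|x-content-type-options|x-xss-protection'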

#26: Force HTTPS

There is no reason to use plain HTTP. Force everyone to use HTTPS by default; a quick way to verify the redirect follows these links:

  1. How To: Nginx Redirect All HTTP Request To HTTPS Rewrite 301 Rules
  2. How to redirect non-www to www HTTP / TLS /SSL traffic on Nginx
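Once the redirect is in place, you can verify it with curl; a correctly configured site prints a 301 pointing at the HTTPS URL (the hostname is a placeholder):

curl -sI -o /dev/null -w "%{http_code} -> %{redirect_url}\n" http://example.com/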

#27: Monitoring nginx

The stub_status module allows you to see the total number of client requests and other info. See how to enable it:

  1. nginx: See Active connections / Connections Per Seconds
  2. HowTo: Enable Nginx Status Page

Bonus Tip: Watching Your Logs & Auditing

Check the log files. They will give you some understanding of what attacks are thrown against the server and allow you to check whether the necessary level of security is present or not.
# grep "/login.php??" /usr/local/nginx/logs/access_log
# grep "...etc/passwd" /usr/local/nginx/logs/access_log
# egrep -i "denied|error|warn" /usr/local/nginx/logs/error_log

The auditd service is provided for system auditing. Turn it on to audit SELinux events, authentication events, file modifications, account modifications and so on. As usual, disable all unneeded services and follow our “Linux Server Hardening” security tips.

Conclusion

Your nginx server is now properly hardened and is ready to serve webpages. However, you should consult further resources for your web application’s security needs. For example, WordPress or any other third-party app has its own security requirements.
