Go-Live Checklist for an Application on Kubernetes

Running applications in production can be tricky. This post proposes an opinionated checklist for going to production with a web service (i.e. an application exposing an HTTP API) on Kubernetes.

General

  • Application’s name, description, purpose, and owning team are clearly documented (e.g. in a central application registry or wiki)
  • Application’s criticality level was defined (e.g. “tier 1” if the app is highly critical for the business)
  • Development team has sufficient knowledge/experience with the technology stack
  • Responsible 24/7 on-call team is identified and informed
  • Go-Live plan exists, including steps for a potential rollback

Application

  • Application’s code repository (git) has clear instructions on how to develop, how to configure, and how to contribute changes (important for emergency fixes)
  • Code dependencies are pinned (i.e. hotfix changes do not accidentally pull in new libraries)
  • All relevant code is instrumented with OpenTracing or OpenTelemetry
  • OpenTracing/OpenTelemetry semantic conventions are followed (incl. additional company conventions)
  • All outgoing HTTP calls have a defined timeout
  • HTTP connection pools are configured with sane values according to expected traffic
  • Thread pools and/or non-blocking async code is correctly implemented/configured
  • Database connection pools are sized correctly
  • Retries and retry policies (e.g. backoff with jitter) are implemented for dependent services (a minimal sketch follows this list)
  • Circuit breakers are implemented
  • Fallbacks for circuit breakers are defined according to business requirements
  • Load shedding / rate limiting mechanisms are implemented (could be part of provided infrastructure)
  • Application metrics are exposed for collection (e.g. to be scraped by Prometheus)
  • Application logs go to stdout/stderr
  • Application logs follow good practices (e.g. structured logging, meaningful messages), log levels are clearly defined, and debug logging is disabled for production by default (with option to turn on)
  • Application container crashes on fatal errors (i.e. it does not enter some unrecoverable state or deadlock)
  • Application design/code was reviewed by a senior/principal engineer
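
The timeout and retry items above are easy to get subtly wrong, so here is a minimal sketch in Python using only the standard library. The function name, retry count, and delays are illustrative assumptions, not prescriptions from this checklist.

```python
import random
import time
import urllib.request

def call_dependency(url: str, attempts: int = 3,
                    base_delay: float = 0.2, timeout: float = 2.0) -> bytes:
    """Call a dependent service with an explicit timeout and
    exponential backoff with full jitter between retries."""
    for attempt in range(attempts):
        try:
            # Every outgoing HTTP call gets a defined timeout.
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:  # covers URLError and socket timeouts
            if attempt == attempts - 1:
                raise  # out of retries; a circuit breaker could trip here
            # Full jitter: sleep a random fraction of the backoff window.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

The full-jitter variant spreads retries out randomly, which avoids synchronized retry storms against an already struggling dependency.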

Security & Compliance

  • Application can run as an unprivileged (non-root) user; consider an immutable operating system (e.g. CoreOS) on worker nodes
  • Application does not require a writable container filesystem (i.e. can be mounted read-only)
  • HTTP requests are authenticated and authorized (e.g. using OAuth)
  • Mechanisms to mitigate Denial Of Service (DOS) attacks are in place (e.g. ingress rate limiting, WAF)
  • A security audit was conducted
  • Automated vulnerability checks for code / dependencies are in place
  • Processed data is understood, classified (e.g. PII), and documented
  • Threat model was created and risks are documented
  • Other applicable organizational rules and compliance standards are followed

CI/CD

  • Automated code linting is run on every change
  • Automated tests are part of the delivery pipeline
  • No manual operations are needed for production deployments
  • All relevant team members can deploy and rollback
  • Production deployments have smoke tests and optionally automatic rollbacks
  • Lead time from code commit to production is fast (e.g. 15 minutes or less including test runs)

Kubernetes

  • Development team is trained in Kubernetes topics and knows relevant concepts
  • Kubernetes manifests use the latest API version (e.g. apps/v1 for Deployment)
  • Container runs as non-root and uses a read-only filesystem
  • A proper Readiness Probe was defined (see blog post about Readiness/Liveness Probes)
  • No Liveness Probe is used, or there is a clear rationale to use a Liveness Probe (see blog post about Readiness/Liveness Probes)
  • Kubernetes deployment has at least two replicas
  • A Pod Disruption Budget was defined (or is automatically created, e.g. by pdb-controller)
  • Horizontal autoscaling (HPA) is configured if adequate
  • Memory and CPU requests are set according to performance/load tests
  • Memory limit equals memory requests (to avoid memory overcommit)
  • CPU limits are not set or impact of CPU throttling is well understood
  • Application is correctly configured for the container environment (e.g. JVM heap, single-threaded runtimes, runtimes not container-aware)
  • Single application process runs per container
  • Application can handle graceful shutdown and rolling updates without disruptions (see this blog post; a minimal sketch follows this list)
  • Pod Lifecycle Hook (e.g. “sleep 20” in preStop) is used if the application does not handle graceful termination
  • All required Pod labels are set (e.g. “application”, “component”, “environment”)
  • Application is set up for high availability: pods are spread across failure domains (AZs, default behavior for cross-AZ clusters) and/or application is deployed to multiple clusters
  • Kubernetes Service uses the right label selector for pods (e.g. not only matches the “application” label, but also “component” and “environment” for future extensibility)
  • There are no anti-affinity rules defined, unless really required (pods are spread across failure domains by default)
  • Optional: Tolerations are used as needed (e.g. to bind pods to a specific node pool)
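
For the readiness-probe and graceful-termination items above, the sketch below shows the shape of the behavior in Python (standard library only). The port, probe path, and drain period are illustrative assumptions; in a real deployment the probe must match the readinessProbe in the manifest.

```python
import signal
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

ready = True  # what the readiness probe reports

class Probe(BaseHTTPRequestHandler):
    def do_GET(self):
        # Readiness endpoint: 200 while accepting traffic, 503 while draining.
        self.send_response(200 if ready else 503)
        self.end_headers()

def on_sigterm(signum, frame):
    global ready
    ready = False  # fail readiness so Kubernetes removes the pod from endpoints

signal.signal(signal.SIGTERM, on_sigterm)

server = HTTPServer(("", 8080), Probe)
Thread(target=server.serve_forever, daemon=True).start()

while ready:
    time.sleep(1)   # real request handling / work loop would live here
time.sleep(10)      # drain in-flight requests (the role of a preStop sleep)
server.shutdown()   # stop accepting connections and exit cleanly
```

If the application cannot react to SIGTERM like this, a preStop "sleep" hook approximates the same drain window from the outside.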

See also this curated checklist of Kubernetes production best practices.

Monitoring

  • Metrics for The Four Golden Signals are collected (a minimal sketch follows this list)
  • Application metrics are collected (e.g. via Prometheus scraping)
  • Backing data store (e.g. PostgreSQL database) is monitored
  • SLOs are defined
  • Monitoring dashboards (e.g. Grafana) exist (could be automatically set up)
  • Alerting rules are defined based on impact, not potential causes
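
As a sketch of the first two items, the snippet below exposes request counts (traffic and errors) and latency, i.e. three of the four golden signals, for Prometheus to scrape. It assumes the prometheus_client Python package; the metric names and port are illustrative.

```python
import random
import time
from prometheus_client import Counter, Histogram, start_http_server  # pip install prometheus_client

REQUESTS = Counter("http_requests_total", "Total HTTP requests", ["status"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency")

def handle_request() -> None:
    start = time.time()
    ok = random.random() > 0.01           # stand-in for real request handling
    REQUESTS.labels(status="200" if ok else "500").inc()
    LATENCY.observe(time.time() - start)  # latency signal

if __name__ == "__main__":
    start_http_server(9102)  # serves /metrics for Prometheus to scrape
    while True:
        handle_request()
        time.sleep(0.1)
```

Saturation, the fourth signal, usually comes from resource metrics (CPU, memory, queue depth) rather than from the application itself.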

Testing

  • Breaking points were tested (system/chaos test)
  • Load test was performed which reflects the expected traffic pattern
  • Backup and restore of the data store (e.g. PostgreSQL database) was tested

24/7 On-Call

  • All relevant 24/7 personnel are informed about the go-live (e.g. other teams, SREs, or other roles like incident commanders)
  • 24/7 on-call team has sufficient knowledge about the application and business context
  • 24/7 on-call team has necessary production access (e.g. kubectl, kube-web-view, application logs)
  • 24/7 on-call team has expertise to troubleshoot production issues with the tech stack (e.g. JVM)
  • 24/7 on-call team is trained and confident to perform standard operations (scale up, rollback, etc.)
  • Runbooks are defined for application-specific incident handling
  • Runbooks for overload scenarios have pre-approved business decisions (e.g. what customer feature to disable to reduce load)
  • Monitoring alerts to page the 24/7 on-call team are set up
  • Automatic escalation rules are in place (e.g. page next level after 10 minutes without acknowledgement)
  • Process for conducting postmortems and disseminating incident learnings exists
  • Regular application/operational reviews are conducted (e.g. looking at SLO breaches)

Review your Infrastructure Architecture Today!

The following review checklists provide a wide range of typical questions that may be used in conducting Architecture Compliance reviews, relating to various aspects of the architecture. The organization of the questions includes the basic disciplines of system engineering, information management, security, and systems management. The checklists are based on material provided by a member of The Open Group, and are specific to that organization. Other organizations could use the following checklists with other questions tailored to their own particular needs.

The checklists provided contain too many questions for any single review: they are intended to be tailored selectively to the project concerned (see 48.6 Architecture Compliance Review Guidelines). The checklists actually used will typically be developed/selected by subject matter experts. They are intended to be updated annually by interest groups in those areas.

Some of the checklists include a brief description of the architectural principle that provokes the question, and a brief description of what to look for in the answer. These extensions to the checklist are intended to allow the intelligent re-phrasing of the questions, and to give the user of the checklist a feel for why the question is being asked.

Occasionally the questions will be written, as in RFPs, or in working with a senior project architect. More typically they are expressed orally, as part of an interview or working session with the project.

The checklists provided here are designed for use in individual architecture projects, not for business domain architecture or for architecture across multiple projects. (Doing an architecture review for a larger sphere of activity, across multiple business processes and system projects, would involve a similar process, but the checklist categories and their contents would be different.)

Detailed Checklist

Hardware and Operating System Checklist

  • What is the project’s life cycle approach?
  • At what stage is the project in its life cycle?
  • What key issues have been identified or analyzed that the project believes will drive evaluations of hardware and operating systems for networks, servers, and end-user devices?
  • What system capabilities will involve high-volume and/or high-frequency data transfers?
  • How does the system design impact or involve end-user devices?
  • What is the quantity and distribution (regional and global) of usage, data storage, and processing?
  • What applications are affinitized with your project by similarities in data, application services, etc.? To what degree is data affinitized with your project?
  • What hardware and operating system choices have been made before functional design of key elements of the system?
  • If hardware and operating system decisions were made outside of the project’s control:
    • What awareness does the project have of the rationale for those decisions?
    • How can the project influence those decisions as system design takes shape?
  • If some non-standards have been chosen:
    • What are the essential business and technical requirements for not using corporate standards?
    • Is this supported by a business case?
    • Have the assumptions in the business case been subject to scrutiny?
  • What is your process for evaluating full life cycle costs of hardware and operating systems?
  • How has corporate financial management been engaged in evaluation of life cycle costs?
  • Have you performed a financial analysis of the supplier?
  • Have you made commitments to any supplier?
  • Do you believe your requirements can be met by only one supplier?

Software Services and Middleware Checklist

  • Describe how error conditions are defined, raised, and propagated between application components.
  • Describe the general pattern of how methods are defined and arranged in various application modules.
  • Describe the general pattern for how method parameters are defined and organized in various application modules. Are [in], [in/out], [out] parameters always specified in the same order? Do Boolean values returned by modules have a consistent outcome?
  • Describe the approach that is used to minimize the number of round-trips between client and server calls, particularly for out-of-process calls, and when complex data structures are involved.
  • Describe the major data structures that are passed between major system components.
  • Describe the major communication protocols that are used between major system components.
  • Describe the marshaling techniques that are used between various system components. Describe any specialized marshaling arrangements that are used.
  • Describe to what extent the system is designed with stateful and stateless components.
  • Describe how and when state is saved for both stateful and stateless components.
  • Describe the extent to which objects are created, used, and destroyed versus re-used through object pooling.
  • Describe the extent to which the system relies on threading or critical section coding.
  • Describe the approach and the internal documentation used in the system to document the methods, method arguments, and method functionality.
  • Describe the code review process that was used to build the system.
  • Describe the unit testing that has been used to test the system components.
  • Describe the pre- and post-condition testing that is included in various system modules.
  • Describe the assertion testing that is included with the system.
  • Do components support all the interface types they need to support or are certain assumptions made about what types of components will call other components either in terms of language bindings or other forms of marshaling?
  • Describe the extent to which big-endian or little-endian data format problems need to be handled across different platforms.
  • Describe if numbers or strings need to be handled differently across different platforms.
  • Describe whether the software needs to check for floating-point round-off errors.
  • Describe how time and date functions manage dates so as to avoid improper handling of time and date calculation or display.
  • Describe what tools or processes have been used to test the system for memory leaks, reachability, or general robustness.
  • Describe the layering of the systems services software. Describe the general number of links between major system components. Is the system composed of a lot of point-to-point interfaces or are major messaging backbones used instead?
  • Describe to what extent the system components are either loosely coupled or tightly coupled.
  • What requirements does the system need from the infrastructure in terms of shared libraries, support for communication protocols, load balancing, transaction processing, system monitoring, naming services, or other infrastructure services?
  • Describe how the system and system components are designed for refactoring.
  • Describe how the system or system components rely on common messaging infrastructure versus a unique point-to-point communication structure.

Applications Checklists

Infrastructure (Enterprise Productivity) Applications

  • Is there need for capabilities that are not provided through the enterprise’s standard infrastructure application products? For example:
    • Collaboration
      1. Application sharing
      2. Video conferencing
      3. Calendaring
      4. Email
    • Workflow management
    • Publishing/word processing applications
      1. HTML
      2. SGML and XML
      3. Portable document format
      4. Document processing (proprietary format)
      5. Desktop publishing
    • Spreadsheet applications
    • Presentation applications
      1. Business presentations
      2. Image
      3. Animation
      4. Video
      5. Sound
      6. CBT
      7. Web browsers
    • Data management applications
      1. Database interface
      2. Document management
      3. Product data management
      4. Data warehouses/mart
    • Program management applications
      1. Project management
      2. Program visibility
  • Describe the business requirements for enterprise infrastructure application capabilities that are not met by the standard products.

Business Applications

  • Are any of the capabilities required provided by standard products supporting one or more line-of-business applications? For example:
    • Business acquisition applications
      1. Sales and marketing
    • Engineering applications
      1. Computer-aided design
      2. Computer-aided engineering
      3. Mathematical and statistics analysis
    • Supplier management applications
      1. Supply chain management
      2. Customer relationship management
    • Manufacturing applications
      1. Enterprise Resource Planning (ERP) applications
      2. Manufacturing execution systems
      3. Manufacturing quality
      4. Manufacturing process engineering
      5. Machine and adaptive control
    • Customer support applications
      1. Airline logistics support
      2. Maintenance engineering
    • Finance applications
    • People applications
    • Facilities applications
    • Information systems applications
      1. Systems engineering
      2. Software engineering
      3. Web developer tools
      4. Integrated development environments
      5. Lifecycle categories
      6. Functional categories
      7. Specialty categories
    • Computer-aided manufacturing
    • e-Business enablement
    • Business process engineering
      1. Statistical quality control
  • Describe the process requirements for business application capabilities that are not met by the standard products.

Application Integration Approach

  • What integration points (business process/activity, application, data, computing environment) are targeted by this architecture?
  • What application integration techniques will be applied (common business objects [ORBs], standard data definitions [STEP, XML, etc.], common user interface presentation/desktop)?

Information Management Checklists

Data Values

  • What are the processes that standardize the management and use of the data?
  • What business process supports the entry and validation of the data? Use of the data?
  • What business actions correspond to the creation and modification of the data?
  • What business actions correspond to the deletion of the data and is it considered part of a business record?
  • What are the data quality requirements required by the business user?
  • What processes are in place to support data referential integrity and/or normalization?

Data Definition

  • What are the data model, data definitions, structure, and hosting options of purchased applications (COTS)?
  • What are the rules for defining and maintaining the data requirements and designs for all components of the information system?
  • What shareable repository is used to capture the model content and the supporting information for data?
  • What is the physical data model definition (derived from logical data models) used to design the database?
  • What software development and data management tools have been selected?
  • What data owners have been identified to be responsible for common data definitions, eliminating unplanned redundancy, providing consistently reliable, timely, and accurate information, and protecting data from misuse and destruction?

Security/Protection

  • What are the data entity and attribute access rules which protect the data from unintentional and unauthorized alterations, disclosure, and distribution?
  • What are the data protection mechanisms to protect data from unauthorized external access?
  • What are the data protection mechanisms to control access to data from external sources that temporarily have internal residence within the enterprise?

Hosting, Data Types, and Sharing

  • What is the discipline for managing sole-authority data as one logical source with defined updating rules for physical data residing on different platforms?
  • What is the discipline for managing replicated data, which is derived from operational sole-authority data?
  • What tier data server has been identified for the storage of high or medium-critical operational data?
  • What tier data server has been identified for the storage of type C operational data?
  • What tier data server has been identified for the storage of decision support data contained in a data warehouse?
  • What Database Management Systems (DBMSs) have been implemented?

Common Services

  • What are the standardized distributed data management services (e.g., validation, consistency checks, data edits, encryption, and transaction management) and where do they reside?

Access Method

  • What are the data access requirements for standard file, message, and data management?
  • What are the access requirements for decision support data?
  • What are the data storage and the application logic locations?
  • What query language is being used?

Security Checklist

  • Security Awareness: Have you ensured that the corporate security policies and guidelines to which you are designing are the latest versions? Have you read them? Are you aware of all relevant computing security compliance and risk acceptance processes? (Interviewer should list all relevant policies and guidelines.)
  • Identification/Authentication: Diagram the process flow of how a user is identified to the application and how the application authenticates that the user is who they claim to be. Provide supporting documentation to the diagram explaining the flow from the user interface to the application/database server(s) and back to the user. Are you compliant with corporate policies on accounts, passwords, etc.?
  • Authorization: Provide a process flow from beginning to end showing how a user requests access to the application, indicating the associated security controls and separation of duties. This should include how the request is approved by the appropriate data owner, how the user is placed into the appropriate access-level classification profile, how the user ID, password, and access is created and provided to the user. Also include how the user is informed of their responsibilities associated with using the application, given a copy of the access agreement, how to change password, who to call for help, etc.
  • Access Controls: Document how the user IDs, passwords, and access profiles are added, changed, removed, and documented. The documentation should include who is responsible for these processes.
  • Sensitive Information Protection: Provide documentation that identifies sensitive data requiring additional protection. Identify the data owners responsible for this data and the process to be used to protect storage, transmission, printing, and distribution of this data. Include how the password file/field is protected. How will users be prevented from viewing someone else’s sensitive information? Are there agreements with outside parties (partners, suppliers, contractors, etc.) concerning the safeguarding of information? If so, what are the obligations?
  • Audit Trails and Audit Logs: Identify and document group accounts required by the users or application support, including operating system group accounts. Identify and document individual accounts and/or roles that have superuser type privileges, what these privileges are, who has access to these accounts, how access to these accounts is controlled, tracked, and logged, and how password change and distribution are handled, including operating system accounts. Also identify audit logs, who can read the audit logs, who can modify the audit logs, who can delete the audit logs, and how the audit logs are protected and stored. Is the user ID obscured in the audit trails?
  • External Access Considerations: Will the application be used internally only? If not, are you compliant with corporate external access requirements?

System Management Checklist

  • What is the frequency of software changes that must be distributed?
  • What tools are used for software distribution?
  • Are multiple software and/or data versions allowed in production?
  • What is the user data backup frequency and expected restore time?
  • How are user accounts created and managed?
  • What is the system license management strategy?
  • What general system administration tools are required?
  • What specific application administration tools are required?
  • What specific service administration tools are required?
  • How are service calls received and dispatched?
  • Describe how the system is uninstalled.
  • Describe the process or tools available for checking that the system is properly installed.
  • Describe tools or instrumentation that are available that monitor the health and performance of the system.
  • Describe the tools or process in place that can be used to determine where the system has been installed.
  • Describe what form of audit logs are in place to capture system history, particularly after a mishap.
  • Describe the capabilities of the system to dispatch its own error messages to service personnel.

System Engineering/Overall Architecture Checklists

General

  • What other applications and/or systems require integration with yours?
  • Describe the integration level and strategy with each.
  • How geographically distributed is the user base?
  • What is the strategic importance of this system to other user communities inside or outside the enterprise?
  • What computing resources are needed to provide system service to users inside the enterprise? Outside the enterprise and using enterprise computing assets? Outside the enterprise and using their own assets?
  • How can users outside the native delivery environment access your applications and data?
  • What is the life expectancy of this application?
  • Describe the design that accommodates changes in the user base, stored data, and delivery system technology.
  • What is the size of the user base and their expected performance level?
  • What performance and stress test techniques do you use?
  • What is the overall organization of the software and data components?
  • What is the overall service and system configuration?
  • How are software and data configured and mapped to the service and system configuration?
  • What proprietary technology (hardware and software) is needed for this system?
  • Describe how each and every version of the software can be reproduced and re-deployed over time.
  • Describe the current user base and how that base is expected to change over the next three to five years.
  • Describe the current geographic distribution of the user base and how that base is expected to change over the next three to five years.
  • Describe how many current or future users need to use the application in a mobile capacity or work off-line.
  • Describe what the application generally does, the major components of the application, and the major data flows.
  • Describe the instrumentation included in the application that allows for the health and performance of the application to be monitored.
  • Describe the business justification for the system.
  • Describe the rationale for picking the system development language over other options in terms of initial development cost versus long-term maintenance cost.
  • Describe the systems analysis process that was used to come up with the system architecture and product selection phase of the system architecture.
  • Who besides the original customer might have a use for or benefit from using this system?
  • What percentage of the users use the system in browse mode versus update mode?
  • What is the typical length of requests that are transactional?
  • Do you need guaranteed data delivery or update, or does the system tolerate failure?
  • What are the up-time requirements of the system?
  • Describe where the system architecture adheres or does not adhere to standards.
  • Describe the project planning and analysis approach used on the project.

Processors/Servers/Clients

  • Describe the client/server Application Architecture.
  • Annotate the pictorial to illustrate where application functionality is executed.

Client

  • Are functions other than presentation performed on the user device?
  • Describe the data and process help facility being provided.
  • Describe the screen-to-screen navigation technique.
  • Describe how the user navigates between this and other applications.
  • How is this and other applications launched from the user device?
  • Are there any inter-application data and process sharing capabilities? If so, describe what is being shared and by what technique/technology.
  • Describe data volumes being transferred to the client.
  • What are the additional requirements for local data storage to support the application?
  • What are the additional requirements for local software storage/memory to support the application?
  • Are there any known hardware/software conflicts or capacity limitations caused by other application requirements or situations which would affect the application users?
  • Describe how the look-and-feel of your presentation layer compares to the look-and-feel of the other existing applications.
  • Describe to what extent the client needs to support asynchronous and/or synchronous communication.
  • Describe how the presentation layer of the system is separated from other computational or data transfer layers of the system.

Application Server

  • Can/do the presentation layer and application layers run on separate processors?
  • Can/do the application layer and data access layer run on separate processors?
  • Can this application be placed on an application server independent of all other applications? If not, explain the dependencies.
  • Can additional parallel application servers be easily added? If so, what is the load balancing mechanism?
  • Has the resource demand generated by the application been measured and what is the value? If so, has the capacity of the planned server been confirmed at the application and aggregate levels?

Data Server

  • Are there other applications which must share the data server? If so, identify them and describe the data and data access requirements.
  • Has the resource demand generated by the application been measured and what is the value? If so, has the capacity of the planned server been confirmed at the application and aggregate levels?

COTS (where applicable)

  • Is the vendor substantial and stable?
  • Will the enterprise receive source code upon demise of the vendor?
  • Is this software configured for the enterprise’s usage?
  • Is there any peculiar A&D data or processes that would impede the use of this software?
  • Is this software currently available?
  • Has it been used/demonstrated for volume/availability/service-level requirements similar to those of the enterprise?
  • Describe the past financial and market share history of the vendor.

System Engineering/Methods & Tools Checklist

  • Do metrics exist for the current way of doing business?
  • Has the system owner created evaluation criteria that will be used to guide the project? Describe how the evaluation criteria will be used.
  • Has research of existing architectures been done to leverage existing work? Describe the method used to discover and understand. Will the architectures be integrated? If so, explain the method that will be used.
  • Describe the methods that will be used on the project:
    • For defining business strategies
    • For defining areas in need of improvement
    • For defining baseline and target business processes
    • For defining transition processes
    • For managing the project
    • For team communication
    • For knowledge management, change management, and configuration management
    • For software development
    • For referencing standards and statements of direction
    • For quality assurance of deliverables
    • For design reviews and deliverable acceptance
    • For capturing metrics
  • Are the methods documented and distributed to each team member?
  • To what extent are team members familiar with these methods?
  • What processes are in place to ensure compliance with the methods?
  • Describe the infrastructure that is in place to support the use of the methods through the end of the project and anticipated releases.
    • How is consultation and trouble-shooting provided?
    • How is training coordinated?
    • How are changes and enhancements incorporated and cascaded?
    • How are lessons learned captured and communicated?
  • What tools are being used on the project? (Specify versions and platforms). To what extent are team members familiar with these tools?
  • Describe the infrastructure that is in place to support the use of the tools through the end of the project and anticipated releases.
    • How is consultation and trouble-shooting provided?
    • How is training coordinated?
    • How are changes and enhancements incorporated and cascaded?
    • How are lessons learned captured and communicated?
  • Describe how the project will promote the re-use of its deliverables and deliverable content.
  • Will the architecture designs “live” after the project has been implemented? Describe the method that will be used to incorporate changes back into the architecture designs.
  • Were the current processes defined?
  • Were issues documented, rated, and associated to current processes? If not, how do you know you are fixing something that is broken?
  • Were existing/planned process improvement activities identified and associated to current processes? If not, how do you know this activity is not in conflict with or redundant to other Statements of Work?
  • Do you have current metrics? Do you have forecasted metrics? If not, how do you know you are improving anything (e.g. from load and stress testing results)?
  • What processes will you put in place to gather, evaluate, and report metrics?
  • What impacts will the new design have on existing business processes, organizations, and information systems? Have they been documented and shared with the owners?

Summary Checklist

General

  • What are the main stakeholders of the system?
  • Is the organisation ready for the transformation? TOGAF recommends checking this with the Business Transformation Readiness Assessment.
  • What are the main actors that interact with the system?
  • What are the major business scenarios and the important requirements? Did you cover the following:
    • regulatory & compliance requirements
    • security requirements
    • reporting requirements
    • data retention requirements
  • What other applications and/or systems require integration with yours? Does it require integration with:
    • Ordering system
    • CRM, Loyalty & Commissioning
    • Billing (In case you have a new service, decide how you will bill it)
    • ERP
    • POS
    • BI & Analytics
    • Reporting & Data warehouse
    • Channels (Online, Mobile, wearables, APIs for partners, IVR, Contact center, Store/Branch GUI, Partners/Resellers/Suppliers GUI, etc.)
    • User behavior tracking (web & mobile analytics, UX tracking)
    • Operational & Performance monitoring
    • Audit & forensic investigation
  • Describe the integration level and strategy with each.
  • What are the SLAs and OLAs? What are the up-time requirements of the system? Does it need high availability?
  • How geographically distributed is the user base?
  • What is the strategic importance of this system to other user communities inside or outside the enterprise?
  • What computing resources are needed to provide system service to users inside the enterprise? Outside the enterprise and using enterprise computing assets? Outside the enterprise and using their own assets?
  • How can users outside the native delivery environment access your applications and data?
  • What is the life expectancy of this application?
  • Describe the design that accommodates changes in the user base, stored data, and delivery system technology.
  • What is the size of the user base and their expected performance level?
  • What performance and stress test techniques do you use?
  • What is the overall organization of the software and data components?
  • What is the overall service and system configuration?
  • How are software and data configured and mapped to the service and system configuration?
  • What proprietary technology (hardware and software) is needed for this system?
  • Describe how each and every version of the software can be reproduced and re-deployed over time.
  • Describe the current user base and how that base is expected to change over the next 3 to 5 years.
  • Describe the current geographic distribution of the user base and how that base is expected to change over the next 3 to 5 years.
  • Describe how many current or future users need to use the application in a mobile capacity or work off-line.
  • Describe what the application generally does, the major components of the application and the major data flows.
  • Describe the instrumentation included in the application that allows for the health and performance of the application to be monitored.
  • Describe the business justification for the system.
  • Describe the rationale for picking the system development language over other options in terms of initial development cost versus long term maintenance cost.
  • Describe the systems analysis process that was used to come up with the system architecture and product selection phase of the system architecture.
  • Who besides the original customer might have a use for or benefit from using this system?
  • What percentage of the users use the system in browse mode versus update mode?
  • What is the typical length of requests that are transactional?
  • Do you need guaranteed data delivery or update, or does the system tolerate failure?
  • Describe where the system architecture adheres or does not adhere to standards.
  • Describe the project planning and analysis approach used on the project.
  • Do you need to migrate users’ data from other systems? Does it require initial loads?
  • What is the licensing scheme? What are the costs associated with system commissioning, both CAPEX and OPEX?
  • Are the component descriptions sufficiently precise?
    • Must allow independent construction.
    • Are interfaces and external functionality of the high-level components described in detail?
    • Avoid implementation details; do not describe each class in detail.
  • Are the relationships between the components explicitly documented? You can use a (sequence) diagram to represent the interaction between components.
  • Is the proposed solution realizable?
    • Can the components be implemented or bought, and then integrated?
    • Possibly introduce a second layer of decomposition to get a better grip on realizability.
  • Are all relevant architectural views documented?
    • Logical view (class diagram per component expresses functionality).
    • Process view (how control threads are set up, interact, evolve, and die).
    • Physical view (deployment diagram relates components to equipment).
    • Development view (how code is organized in files; could also be documented in SCMP appendix).
  • Are cross-cutting issues clearly and generally resolved?
    • Exception handling.
    • Initialization and reset.
    • Memory management.
    • Security.
    • Internationalization.
    • Built-in help.
    • Built-in test facilities.
    • Migration & Initial load
  • Have alternative architectures been sketched and has their evaluation been documented?
  • Have non-functional software requirements also been considered?
  • Negative indicators:
    • High complexity: a component has a complex interface or functionality.
    • Low cohesion: a component contains unrelated functionality.
    • High coupling: two components have many (mutual) connections.
    • High fan-in: a component is needed by many other components.
    • High fan-out: a component depends on many other components.
  • Is the flexibility of the architecture demonstrated?
    • How can it cope with likely changes in the requirements?
    • Document the most relevant change scenarios.
  • What is the deployment approach? If you have client/mobile applications, how do you handle versioning and client diversity?
  • Areas of concern are separated.
  • Every component has a single responsibility.
  • Components do not rely on the internal details of other components.
  • Functionality is not duplicated within the architecture.
  • Components are grouped logically into layers.
  • Abstraction is used to design loose coupling between layers.

Cloud Architecture

When you design a new application, or when you make an important update, consider whether your application can be deployed to or moved into the cloud. Evaluate whether your application can benefit from the cloud:

  • Distribution of your user base (are users restricted to one territory, or is usage global/regional?)
  • Is your application capable of horizontal scaling?
  • Can you split your application in stateless or independent components?
  • How easily can you automate your infrastructure in the cloud (automatic scaling, self-healing, etc.)?
  • Do you use containers?
  • Did you first consider a serverless architecture? If not, why can your solution not run on this type of architecture?
  • Do you use edge caching or CDNs to distribute the content?
  • Did you address the security aspects of the services? How are they protected? Do you make use of an API gateway and access manager capability to standardize API security?
  • Do you want to focus less on infrastructure and more on application development? Let the cloud provider manage the infrastructure and apply world-class security to it, so you can focus on the things that matter to your business and your application/product.

Application architectures and tiers/layers

  • Describe the application architecture.
  • Annotate the pictorial to illustrate where application functionality is executed.
  • Can the application tiers be separated on different machines?
  • Layers represent a logical grouping of components. For example, use separate layers for user interface, business logic, and data access components.
  • Components within each layer are cohesive. For example, the business layer components should provide only operations related to application business logic.
  • Authentication
    • Trust boundaries have been identified, and users are authenticated across trust boundaries.
    • Single sign-on is used when there are multiple systems in the application.
    • Passwords are stored as a salted hash, not plain text (a minimal sketch follows this list).
    • Strong passwords or password phrases are enforced.
    • Passwords are not transmitted in plain text.
  • Authorization
    • Trust boundaries have been identified, and users are authorized across trust boundaries.
    • Resources are protected with authorization on identity, group, claims or role.
    • Role-based authorization is used for business decisions.
    • Resource-based authorization is used for system auditing.
    • Claims-based authorization is used for federated authorization based on a mixture of information such as identity, role, permissions, rights, and other factors.
  • Concurrency and Transactions
    • Business-critical operations are wrapped in transactions.
    • Connection-based transactions are used in the case of a single data source.
    • Transaction Scope (System.Transaction) is used in the case of multiple data sources.
    • Compensating methods are used to revert the data store to its previous state when transactions are not used.
    • Locks are not held for long periods during long-running atomic transactions.
  • Caching
    • Volatile data is not cached.
    • Data is cached in a ready-to-use format.
    • Unencrypted sensitive data is not cached.
    • Transactional resource manager or distributed caching is used if your application is deployed in a web farm.
    • Your application does not depend on data still being in cache.
  • Coupling and Cohesion
    • Application is partitioned into logical layers.
    • Layers use abstraction through interface components, common interface definitions, or shared abstraction to provide loose coupling between layers.
    • The components inside layers are designed for tight coupling, unless dynamic behavior requires loose coupling.
    • Each component only contains functionality specifically related to that component.
    • The trade-offs of abstraction and loose coupling are well understood for your design. For instance, abstraction adds overhead but simplifies the build process and improves maintainability.
  • Validation
    • Validation is performed at both the presentation and business logic layers.
    • Trust boundaries are identified, and all the inputs are validated when they cross the trust boundary.
    • A centralized validation approach is used.
    • Validation strategy constrains, rejects, and sanitizes malicious input.
    • Input data is validated for length, format, and type.
    • Client-side validation is used for user experience and server-side validation is used for security.
  • Configuration Management
    • Least-privileged process and service accounts are used.
    • All the configurable application information is identified.
    • Sensitive information in the configuration is encrypted.
    • Access to configuration information is restricted.
    • If there is a configuration UI, it is provided as a separate administrative UI.
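
For the password-storage item under Authentication above, here is a minimal Python sketch using only the standard library. The iteration count and salt length are reasonable defaults, not mandates.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Return (salt, key) using PBKDF2-HMAC-SHA256 with a per-user random salt."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, key

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, expected)  # constant-time comparison
```

Storing the salt alongside the derived key (never the plain-text password) satisfies the checklist item; dedicated schemes like bcrypt or Argon2 are equally valid choices.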

Client/Presentation tier

  • Are functions other than presentation performed on the user device?
  • Describe the data and process help facility being provided.
  • Describe the screen-to-screen navigation technique.
  • Describe how the user navigates between this and other applications.
  • How is this and other applications launched from the user device?
  • Are there any inter-application data and process sharing capabilities? If so, describe what is being shared and by what technique/technology.
  • Describe data volumes being transferred to the client.
  • What are the additional requirements for local data storage to support the application?
  • What are the additional requirements for local software storage/memory to support the application?
  • Did you consider caching on the client device?
  • Are there any known hardware/software conflicts or capacity limitations caused by other application requirements or situations which would affect the application users?
  • Describe how the look-and-feel of your presentation layer compares to the look-and-feel of the other existing applications.
  • Describe to what extent the client needs to support asynchronous and/or synchronous communication.
  • Describe how the presentation layer of the system is separated from other computational or data transfer layers of the system.
  • Are the wireframes/mockups available?
  • Can it access static content from other locations? Can it access data from a CDN?

Business logic layer

  • Can/do the presentation and business logic layers run on separate processors?
  • Can/do the business logic and data access layers run on separate processors?
  • Can this business logic be placed on an application server independent of all other applications? If not, explain the dependencies.
  • Can additional parallel application servers be easily added? If so, what is the load balancing mechanism?
  • Has the resource demand generated by the business logic been measured and what is the value? If so, has the capacity of the planned server been confirmed at the application and aggregate levels?
  • Does it require shared storage across nodes?

Data Access Layer

  • Database schema is not coupled to your application model.
  • Connections are opened as late as possible and released quickly.
  • Data integrity is enforced in the database, not in the data access layer.
  • Business decisions are made in the business layer, not the data access layer.
  • Database is not directly accessed; database access is routed through the data access layer.
  • Resource gateways are used to access resources outside the application.

Data layer

  • Are there other applications which must share the data server? If so, please identify them and describe the data and data access requirements.
  • Has the resource demand generated by the application been measured and what is the value? If so, has the capacity of the planned server been confirmed at the application and aggregate levels?
  • Does the database support collocation on a DB cluster?
  • What relational database management systems does your application support: Oracle, MS SQL, MySQL, DB2, Sybase, etc.?
  • Does your application use/require NoSQL?

Hardware, Network & OS requirements

  • What are the hardware requirements (machines, CPU, RAM, storage)?
  • What environments are required (e.g. testing, development)?
  • Does it support virtualization? What virtualization technology can be used (e.g. VMware)?
  • Can the architecture be deployed in the cloud? Private or public cloud? Is there a legal requirement to host and process data in certain territories?
  • What are the OS requirements?
  • What are the 3rd-party software requirements? Do they require licenses?
  • Do you need agents to monitor the machine/application?
  • Does it require load balancing?
  • Does it require session persistence?
  • Do we have enough network capacity (ports, bandwidth) for all network elements: switches, routers, etc.?

COTS (where applicable)

  • Is the vendor substantial and stable?
  • Will the enterprise receive source code upon demise of the vendor?
  • Is this software configured for the enterprise’s usage?
  • Is there any peculiar A&D data or processes that would impede the use of this software?
  • Is this software currently available?
  • Has it been used/demonstrated for volume/availability/service-level requirements similar to those of the enterprise?
  • Describe the past financial and market share history of the vendor.

Business readiness

  • Are the internal policies updated?
  • Are the Customer Support Agents & Sales Agents trained on the new solution?
  • Is the documentation updated?
  • In case of a new system, has it been formally handed over to the Ops team?
  • Are all the compliance requirements met?

Review your Application Architecture Today!

Abstract

Application architecture review can be defined as reviewing the current security controls in the application architecture. This helps a user identify potential security flaws at an early stage and mitigate them before the development stage starts. Poor architectural design may expose the application to many security loopholes. It is preferable to perform the architecture review at the design stage, as the cost and effort required for implementing security after development is high.

This document can be considered a secure design guideline for architects, or a checklist for penetration testers performing an application architecture review as part of an overall security assessment.

The following list covers some of the primary issues that must be addressed at the design stage.

While doing the architecture review, we can primarily focus on the following areas:

  1. Application Architecture Documents
  2. Deployment and Infrastructure Considerations
  3. Input Validation
  4. Authentication
  5. Authorization
  6. Configuration Management
  7. Session Management
  8. Cryptography
  9. Parameter Manipulation
  10. Exception Management
  11. Auditing & Logging
  12. Application Framework and Libraries

Additional categories, or points under any category, can be added as required. Let’s have a look at each area separately:

Application Architecture Documents:

The first thing to look for is the availability of the application architecture document. Every application should have a properly documented architecture diagram with a high-level explanation of the above points and a network connectivity diagram showing how the different components are placed and secured.

Deployment and Infrastructure Considerations:

Review the infrastructure on which the application is deployed. This can include reviewing the network, system, infrastructure performance monitoring, etc.

Some of the points which should be taken into consideration are as follows:

  • Components required for the application: What OS supports the application, what are the hardware requirements, etc.?
  • Restrictions applied on the firewall: Review the firewall policies defined for the application. What type of traffic is allowed and what type of traffic is blocked?
  • Port and Service requirement: An application may communicate with other applications as well. Identify which ports and services are required to be open for the application.
  • Component Segregation: Components of the application should be segregated from each other. For example, the application server and database server should not reside in the same machine.
  • Disable clear-text protocols: Ports running clear-text services (HTTP, FTP, etc.) should be closed and not used for any part of the application.

Input Validation

Weak input validation is one of the main causes of application security weaknesses. Proper input validation can help prevent many attacks, such as Cross-Site Scripting and SQL Injection. Validation should be applied to every input field (including hidden form fields) on all pages. The best practice is to use a centralized approach.

Some of the points which should be taken into consideration are as follows:

  • Mechanism to validate the user inputs: Check if the application is validating the user input or processing the input as it is.
  • Bypassing the validation: Check how the user input is being validated. Is it possible to bypass the validation, for example by encoding the input? Identify whether input validation depends on the application framework, and check whether there is any vulnerability in the framework through which a user can bypass the validation.
  • Centralized approach: If custom implementation to validate the user input is present, check whether the approach is centralized.
  • Validating across all the tiers: As a best practice, validation should be applied on all the layers, i.e. business layer, data layer, etc.
  • Addressing SQL Injection: Input validation helps mitigate SQL Injection to some extent. Check whether the application is safe against SQL Injection by using parameterized queries in the back end (a minimal sketch follows this list).
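
To illustrate the last item, a minimal sketch with Python’s built-in sqlite3 driver. The table and the hostile input are invented for the example; the pattern (placeholders instead of string concatenation) is the point.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # hostile input from a form field

# Unsafe: concatenation lets the input rewrite the SQL statement.
#   conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Safe: the driver binds the value, so the input is data, never SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```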

Authentication

Authentication is the act of verifying a user’s identity. In the application, it is achieved by providing a username and password. A weak authentication mechanism can result in bypassing the login process and accessing the application, which can lead to a major compromise. The application should be designed with strong authentication.

Some of the points which should be taken into consideration are as follows:

  • Authentication control on the server side: Make sure the credentials are verified at the server side and not on the client side. Client-side validation can be easily bypassed.
  • Secure channel for authentication: Login credentials should always be sent through an encrypted channel. Credentials going through a clear text channel can be easily sniffed by the attacker.
  • Check whether the login page is served over plain HTTP. Check whether the application is accessible on any other port where no SSL/TLS certificate is implemented.
  • Change password page: Check whether the Old Password field is present in the Change Password page and verified as well.
  • Strong password policy: An application should be configured to accept strong passwords only. Weak passwords can be brute forced easily.
  • Authentication cookie: Check whether SSL/TLS is implemented across the entire application and the authentication cookie is not sent in clear text on any page.
  • Service account: A service account is an account under which a service application runs. Service accounts used by the application to communicate with the database should have a restrictive set of privileges.
  • Default framework password: Many application frameworks come with a default password. Make sure the password is changed to a non-guessable, strong password.

Authorization

Authorization determines which resources can be accessed by the authenticated user. Weak authorization controls can lead to privilege escalation attacks.

Some of the points which should be taken into consideration are as follows:

  • Privilege escalation and spoofing: Privilege escalation happens when a user gets access to more resources, or can perform more actions, than they are allowed. Check the controls present when the user tries to escalate their privileges by manipulating the request or by directly accessing an unauthorized page/resource.
  • Direct object reference: Check whether the application provides direct access to objects based on user-supplied input. This may allow an attacker to bypass authorization and access resources belonging to other users, for example by downloading other users’ invoices/statements.

Configuration Management

Weak configuration should be avoided. Any sensitive information stored in a configuration file can be extracted by an attacker.

Some of the points which should be taken into consideration are as follows:

  • Secure Hardening: Make sure all the components required by the application are up to date and the latest patches are applied to them. Default configurations should be changed wherever possible.
  • Sensitive Data: Sensitive data like database connection strings, encryption keys, admin credentials or any other secrets should not be stored as clear text in the code. Check whether the configuration file is secured against unauthorized access.
  • Persistent cookie: Storing sensitive data as plain text in a persistent cookie should be avoided, since the user can see and modify it. Check whether the application stores clear-text data in a persistent cookie.
  • Passing sensitive data using the GET method: The GET method sends data in the query string. Sensitive information sent in a GET request can be recovered from the browser history or server logs.
  • Disable unused methods: Verify that the application accepts only the GET and POST methods. Other methods like TRACE, PUT, DELETE, etc. should be disabled; see the filter sketch after this list.
  • Sensitive data over HTTP: Communication between components, such as between the application server and the database server, should be encrypted.
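
One way to enforce the method restriction inside a Java web application is a simple servlet filter. This is a sketch only; in practice this is often configured in the web server or framework instead:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AllowedMethodsFilter implements Filter {
    // Rejects every HTTP method except GET and POST with "405 Method Not Allowed".
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String method = ((HttpServletRequest) req).getMethod();
        if ("GET".equals(method) || "POST".equals(method)) {
            chain.doFilter(req, res);
        } else {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED);
        }
    }

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}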

Session Management

A session is a track of the user’s activities. Strong session management plays an important role in the overall security of the application; weaknesses in session management may lead to serious attacks.

Some of the points which should be taken into consideration are as follows:

  • Use the framework’s default session management: Custom session management can have multiple vulnerabilities. Ensure that no custom session manager is being used and that the application framework’s default session management is used.
  • Ensure session management best practices are followed:
    • The session ID is random, long, and unique.
    • The session is invalidated after logout.
    • The session ID changes on successful authentication and re-authentication.
    • The session ID is not passed in the URL.
    • The session times out after a certain period of inactivity.
    • The session ID is sent only over a secure channel (SSL/TLS).
    • Cookie attributes (HttpOnly, Secure, path and domain) are set securely; see the sketch below.
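
With the Java Servlet API, for example, the cookie attributes can be hardened roughly like this. A minimal sketch; the cookie name, path, and domain are illustrative:

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CookieHardening {
    // Sets a session cookie that JavaScript cannot read (HttpOnly), that is
    // only ever sent over HTTPS (Secure), and that is scoped to a path/domain.
    public static void addSessionCookie(HttpServletResponse response, String sessionId) {
        Cookie cookie = new Cookie("SESSIONID", sessionId);  // illustrative name
        cookie.setHttpOnly(true);
        cookie.setSecure(true);
        cookie.setPath("/app");                              // illustrative path
        cookie.setDomain("example.com");                     // illustrative domain
        response.addCookie(cookie);
    }
}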

Cryptography

Applications frequently use cryptography to secure the stored data or to protect the data in transit over an insecure channel.

Some of the points which should be taken into consideration are as follows:

  • Custom Implementation: Designing a dedicated encryption mechanism may lead to weaker protection. Secure cryptographic service provided by the platform should be used. Check what type of encryption is being used in the application.
  • Encryption Key management: Check whether there is any policy on encryption key management, i.e., Key generation, distribution, deletion, and expiration.
  • Securing encryption keys: Encryption keys are used as input to encrypt or decrypt the data. If the encryption keys are compromised, the encrypted data can be decrypted and will no longer be secure.
  • Key recycle policy: Keys should be rotated after a certain period. Using the same key for a long time is not a safe practice.

Parameter Manipulation

In a Parameter Manipulation attack, the attacker modifies the data sent from the client to the web application (e.g. form fields, query strings, cookies, and headers). This can result in unauthorized access to services.

Some of the points which should be taken into consideration are as follows:

  • Validate all inputs from the client: Validation applied on the client side may reduce the load on the server but relying only on client-side validation is not a safe practice. Client-side validation can be bypassed using a proxy tool. Check whether validation is applied on the server as well.
  • Do not rely on HTTP header: Security decision in the application should not be based on the HTTP header. If the application is serving any page by checking only the “referrer” header, then an attacker can bypass this by changing the header in a proxy tool.
  • Encrypt the cookie: Cookies can have data that is being used on the server to authorize the user. This type of data should be protected against unauthorized manipulation attacks.
  • Sensitive data in view state: View state in an ASP.NET application can contain sensitive data that is required to make the authorization decision on the server. Data in view state can be tampered with if MAC (message authentication code) validation is not enabled. Check whether the view state is protected using MAC.

Exception Management

Insecure exception handling can expose valuable information that an attacker can use to fine-tune an attack. Without proper exception management, information such as stack traces, framework details, server details, SQL queries, internal paths and other sensitive details can be exposed. Check whether centralized exception management is in place, with minimal information being displayed.

Auditing & Logging

Log files contain a record of events. These events can be successful or failed login attempts, data retrieval, modification, deletion, network communication, etc. The logs should be monitored in real time.

Some of the points which should be taken into consideration are as follows:

  • Logging enabled: Check whether logging is enabled at both the application and the platform level.
  • Log events: Logs should be generated for all security-relevant events like successful and failed authentication, data access, modification, network access, etc. Each log entry should include the time of the event, the user identity, the location with machine name, etc. Identify which events are being logged.
  • Logging sensitive data: An application should not log sensitive data like user credentials, password hashes, credit card details, etc.
  • Storage, security, and analysis:
    • The log file should be stored on a different partition than the one on which the application is running. Log files should be copied and moved to permanent storage for retention.
    • The log files must be protected against unauthorized access, modification or deletion.
    • The log file should be analyzed on a regular interval.

Application Framework and Libraries

Make sure that the application framework and libraries are up to date and relevant patches are applied to them. Verify that no default passwords are in use in the framework (admin/admin, tomcat/tomcat, etc.). Check whether an old or vulnerable framework version is in use.

Conclusion

The above points represent the key areas for securely designing an application. Addressing them at the design stage can reduce the overall cost and effort of securing the application. If the application is already deployed, a secure architecture review is an important part of the overall security assessment and can help in fixing existing vulnerabilities and improving future design.

What is Merkle Tree in Blockchain?

What’s A Merkle Tree?

If you’re a newcomer to the blockchain world, you may have come across the phrase “Merkle Tree” and felt a little lost. While Merkle Trees are not a widely-understood concept, they’re also not terribly complicated.

So, what’s a Merkle Tree? To put it very simply, a Merkle Tree is a method of structuring data that allows a large body of information to be verified for accuracy extremely quickly and efficiently.

Since Merkle Trees are such a crucial component of blockchain technology, it’s worth gaining an in-depth understanding of them. This post will help you do just that. Let’s get started.

All About Merkle Trees

The story of Merkle Trees begins way back in 1979 with a person named Ralph Merkle. While in grad school at Stanford University, Merkle wrote an academic paper called “A Certified Digital Signature.” In this essay, Merkle described a new, extremely efficient method of creating proofs. In other words, he designed a process for verifying data that would allow computers to do their work much, much faster than ever before.

Merkle’s idea, now better known as a Merkle Tree, revolutionised the world of cryptography and, by extension, the way that encrypted computer protocols function. In fact, Merkle Trees are mentioned repeatedly in Satoshi Nakamoto’s 2008 essay that introduced Bitcoin to the world. They’re also used extensively in Bitcoin’s foundational code.

So, what exactly are Merkle Trees? First, it’s important to note that each transaction on a blockchain has its own unique transaction ID. With most blockchains, each transaction ID is a 64-character code (a SHA-256 hash) that takes up 256 bits (32 bytes) of memory.

When you consider that blockchains are typically made up of hundreds of thousands of blocks, with each block containing as many as several thousand transactions, you can imagine how quickly memory space becomes a problem.

As such, it’s optimal to use as little data as possible when processing and verifying transactions. This minimises CPU processing times while also ensuring the highest level of security.

Well, that’s exactly what Merkle Trees do. To put it very simply, Merkle Trees take a huge number of transaction IDs and put them through a mathematical process that results in a single, 64-character code (a SHA-256 hash).

This code is extremely important because it allows any computer to quickly verify that a specific transaction took place on a particular block as efficiently as possible. This code is called the Merkle Root.

What’s A Merkle Root?

The single code that a Merkle Tree produces is called the Merkle Root. Each block in a blockchain has exactly one. And, as we just mentioned, the Merkle Root is a crucial piece of data because it allows computers to verify information with incredible speed and efficiency.

Let’s dive a little deeper. How is a Merkle Root produced? The first step is to organise all of the data inputs.

Merkle Trees, by design, always group all of the inputs into pairs. If there is an odd number of inputs, the last input is copied and then paired with itself. This holds true for all the transaction IDs written onto a block of a blockchain.

Example:

EvenNumber: with an even count, inputs pair directly, e.g. (11223344, 55667788), which hashes to 12345678 under the simplified rule used below.

OddNumber: with an odd count, the last input is copied and paired with itself, e.g. a leftover 55667777 becomes the pair (55667777, 55667777).

For instance, let’s suppose that a single block contains a total of 512 transactions. The Merkle Tree would begin by grouping those 512 transaction IDs into 256 pairs. Then, those 256 pairs of transaction IDs would go through a mathematical process, called a hashing function or hashing algorithm, that would result in 256 new, 64-character alphanumeric codes.

The same exact process would occur again. Those 256 new codes would be paired up and turned into 128 codes. The process would be repeated, cutting the number of codes in half each time, until only a single code remained. That single code is our Merkle Root.

An Example Of A Merkle Tree

To make this concept clear, let’s look at a very simple example of a Merkle Tree. Imagine that there were 8 transactions performed on one particular block. In reality, transaction IDs are 64 characters long, but for the sake of simplicity, let’s pretend that they’re only 8 characters long. To make things even easier, let’s use only numbers (and ignore letters altogether).

So, in this example, our eight transaction IDs will be:

  • 11111111
  • 22222222
  • 33333333
  • 44444444
  • 55555555
  • 66666666
  • 77777777
  • 88888888

Now let’s suppose that the method for hashing transaction IDs together is to take the first, third, fifth, and seventh digits from each of the two IDs being combined, and then simply push those numbers together to form a new, 8-digit code.

Of course, in reality, the mathematics behind hashing algorithms is far more complicated than this. But for this simple demonstration, this elementary system will suffice.

This is what our Merkle Tree would look like:
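
Rendered as text, with each level hashing adjacent pairs using the simplified rule above and the Merkle Root at the bottom:

11111111  22222222   33333333  44444444   55555555  66666666   77777777  88888888
    \      /             \      /             \      /             \      /
    11112222             33334444             55556666             77778888
         \                  /                      \                  /
          \                /                        \                /
           11223344                                  55667788
                 \                                      /
                  \                                    /
                   12345678  (the Merkle Root)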

Notice that the number of codes is cut in half each step down the Merkle Tree. We start with 8 transaction IDs and, after just 3 steps, end up with a single code—the Merkle Root. In this example, our Merkle Root is the code in the bottom box: 12345678.

The primary benefit of Merkle Trees is that they allow extremely quick verification of data. If we want to validate a single transaction ID, we wouldn’t need to double-check every single transaction on the block. Rather, we would only need to verify that particular “branch” of our Merkle Tree.

Efficiency And Speed: The Benefits Of Merkle Trees

Let’s suppose that we want to validate a transaction ID in our current example. Suresh says that he paid Praveen a certain sum of Bitcoin and tells us that the transaction ID is 88888888. He also sends us 3 hashes: 77777777, 55556666, and 11223344. That’s all the info that needs to be sent or received to verify Suresh’s payment to Praveen.

These three hashes, along with the transaction ID in question and this particular block’s Merkle Root, are the only data needed to verify Suresh’s payment to Praveen. This is far less data than what would be required to verify the entire Merkle Tree. As a result, the verification process is much faster and far more efficient for everyone.

Here’s how it works. We already have the block’s Merkle Root, so Suresh doesn’t need to send us that. He sends us his transaction ID and the 3 additional hashes we listed above. He also sends a tiny bit of information about the order and placement in which to use the hashes. Now, all we have to do is run the hashing algorithm on the set of data Suresh provided.

We start by hashing the first code 77777777 with the transaction ID 88888888, which gives us the result 77778888. Suresh didn’t send us this code but he didn’t need to because we’re using the same hashing algorithm as him. Therefore, we receive the exact same results.

We then take the second code Suresh sent us, 55556666, and hash it with the new code 77778888 we just derived. This, of course, produces the number 55667788.

Finally, we hash the third code Suresh gave us, 11223344, with the other new code we received, 55667788, and we end up with the correct Merkle Root: 12345678.

Notice that we only need 3 codes from Suresh and only had to run the hashing algorithm three times to see that Suresh’s transaction is valid. That means our computer has done less than half the work that would’ve been required to verify the entire Merkle Tree. The original Merkle Tree diagram has 15 numbers and the hashing algorithm needs to be run 7 times. But more than half of that tree isn’t necessary to verify Suresh’s transaction!

This procedure is sufficient to verify that Suresh did, in fact, pay Praveen that certain sum of Bitcoin because we derived numbers that, when hashed together with the other codes Suresh sent us, produced the same Merkle Root that we already knew to be true for this particular block.

Suresh can’t fake a transaction because that would require finding a fake transaction ID and an additional set of fake codes that, when put through the hashing function, would produce the true Merkle Root. The chances of this happening are so astronomically small that we can confidently say it’s impossible.

In this simple example, the savings of computing power might not seem substantial. However, when you consider that blocks in a blockchain might contain several thousand transactions, it’s easy to see how Merkle Trees increase efficiency so dramatically.

In short, that’s the main benefit of a Merkle Tree. It allows computers to verify information extremely efficiently and with far less data than what would be required without the Merkle Tree.
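
To make the mechanics concrete, here is a toy Java sketch of the pairing-and-hashing loop described above. It uses a single SHA-256 per step as a simplification (Bitcoin itself hashes transaction IDs with double SHA-256), and it expects a mutable list:

import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class MerkleRoot {
    static byte[] sha256(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    static byte[] concat(byte[] a, byte[] b) {
        byte[] out = new byte[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }

    // Repeatedly pair and hash until one hash (the Merkle root) remains.
    static byte[] merkleRoot(List<byte[]> hashes) throws Exception {
        while (hashes.size() > 1) {
            if (hashes.size() % 2 != 0)                 // odd count: copy the last input
                hashes.add(hashes.get(hashes.size() - 1));
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < hashes.size(); i += 2)
                next.add(sha256(concat(hashes.get(i), hashes.get(i + 1))));
            hashes = next;
        }
        return hashes.get(0);
    }
}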

The RIGHT way of Password Hashing !

If you’re a web developer, you’ve probably had to make a user account system. The most important aspect of a user account system is how user passwords are protected. User account databases are hacked frequently, so you absolutely must do something to protect your users’ passwords if your website is ever breached. The best way to protect passwords is to employ salted password hashing. This page will explain why it’s done the way it is.

There are a lot of conflicting ideas and misconceptions about how to do password hashing properly, probably due to the abundance of misinformation on the web. Password hashing is one of those things that’s so simple, and yet so many people get it wrong. With this page, I hope to explain not only the correct way to do it, but why it should be done that way.

IMPORTANT WARNING: If you are thinking of writing your own password hashing code, please don’t! It’s too easy to screw up. No, that cryptography course you took in university doesn’t make you exempt from this warning. This applies to everyone: DO NOT WRITE YOUR OWN CRYPTO! The problem of storing passwords has already been solved. Use either phpass, the PHP, C#, Java, and Ruby implementations in defuse/password-hashing, or libsodium.

If for some reason you missed that big red warning note, please go read it now. Really, this guide is not meant to walk you through the process of writing your own storage system; it’s meant to explain the reasons why passwords should be stored a certain way.

What is password hashing?

hash("hello") = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
hash("hbllo") = 58756879c05c68dfac9866712fad6a93f8146f337a69afe7dd238f3364946366
hash("waltz") = c0e81794384491161f1777c232bc6bd9ec38f616560b120fda8e90f383853542

Hash algorithms are one way functions. They turn any amount of data into a fixed-length “fingerprint” that cannot be reversed. They also have the property that if the input changes by even a tiny bit, the resulting hash is completely different (see the example above). This is great for protecting passwords, because we want to store passwords in a form that protects them even if the password file itself is compromised, but at the same time, we need to be able to verify that a user’s password is correct.

The general workflow for account registration and authentication in a hash-based account system is as follows:

  1. The user creates an account.
  2. Their password is hashed and stored in the database. At no point is the plain-text (unencrypted) password ever written to the hard drive.
  3. When the user attempts to login, the hash of the password they entered is checked against the hash of their real password (retrieved from the database).
  4. If the hashes match, the user is granted access. If not, the user is told they entered invalid login credentials.
  5. Steps 3 and 4 repeat every time someone tries to login to their account.

In step 4, never tell the user if it was the username or password they got wrong. Always display a generic message like “Invalid username or password.” This prevents attackers from enumerating valid usernames without knowing their passwords.

It should be noted that the hash functions used to protect passwords are not the same as the hash functions you may have seen in a data structures course. The hash functions used to implement data structures such as hash tables are designed to be fast, not secure. Only cryptographic hash functions may be used to implement password hashing. Hash functions like SHA256, SHA512, RipeMD, and WHIRLPOOL are cryptographic hash functions.

It is easy to think that all you have to do is run the password through a cryptographic hash function and your users’ passwords will be secure. This is far from the truth. There are many ways to recover passwords from plain hashes very quickly. There are several easy-to-implement techniques that make these “attacks” much less effective. To motivate the need for these techniques, consider a service like CrackStation: you can submit a list of hashes to be cracked and receive results in less than a second. Clearly, simply hashing the password does not meet our needs for security.

The next section will discuss some of the common attacks used to crack plain password hashes.

How Hashes are Cracked

  • Dictionary and Brute Force Attacks

    Dictionary Attack

    Trying apple        : failed
    Trying blueberry    : failed
    Trying justinbeiber : failed
    ...
    Trying letmein      : failed
    Trying s3cr3t       : success!
    Brute Force Attack

    Trying aaaa : failed
    Trying aaab : failed
    Trying aaac : failed
    ...
    Trying acdb : failed
    Trying acdc : success!
    The simplest way to crack a hash is to guess the password, hashing each guess and checking whether the guess’s hash equals the hash being cracked. If the hashes are equal, the guess is the password. The two most common ways of guessing passwords are dictionary attacks and brute-force attacks.

    A dictionary attack uses a file containing words, phrases, common passwords, and other strings that are likely to be used as a password. Each word in the file is hashed, and its hash is compared to the password hash. If they match, that word is the password. These dictionary files are constructed by extracting words from large bodies of text, and even from real databases of passwords. Further processing is often applied to dictionary files, such as replacing words with their “leet speak” equivalents (“hello” becomes “h3110”), to make them more effective.

    A brute-force attack tries every possible combination of characters up to a given length. These attacks are very computationally expensive, and are usually the least efficient in terms of hashes cracked per processor time, but they will always eventually find the password. Passwords should be long enough that searching through all possible character strings to find them will take too long to be worthwhile.

    There is no way to prevent dictionary attacks or brute force attacks. They can be made less effective, but there isn’t a way to prevent them altogether. If your password hashing system is secure, the only way to crack the hashes will be to run a dictionary or brute-force attack on each hash.
  • Lookup Tables

    Searching: 5f4dcc3b5aa765d61d8327deb882cf99: FOUND: password5
    Searching: 6cbe615c106f422d23669b610b564800:  not in database
    Searching: 630bf032efe4507f2c57b280995925a9: FOUND: letMEin12 
    Searching: 386f43fab5d096a7a66d67c8f213e5ec: FOUND: mcd0nalds
    Searching: d5ec75d5fe70d428685510fae36492d9: FOUND: p@ssw0rd!

    Lookup tables are an extremely effective method for cracking many hashes of the same type very quickly. The general idea is to pre-compute the hashes of the passwords in a password dictionary and store them, and their corresponding password, in a lookup table data structure. A good implementation of a lookup table can process hundreds of hash lookups per second, even when they contain many billions of hashes.

    If you want a better idea of how fast lookup tables can be, try cracking the following sha256 hashes with CrackStation’s free hash cracker.

    c11083b4b0a7743af748c85d343dfee9fbb8b2576c05f3a7f0d632b0926aadfc
    08eac03b80adc33dc7d8fbe44b7c7b05d3a2c511166bdb43fcb710b03ba919e7
    e4ba5cbd251c98e6cd1c23f126a3b81d8d8328abc95387229850952b3ef9f904
    5206b8b8a996cf5320cb12ca91c7b790fba9f030408efe83ebb83548dc3007bd
  • Reverse Lookup Tables

    Searching for hash(apple) in users' hash list...     : Matches [alice3, 0bob0, charles8]
    Searching for hash(blueberry) in users' hash list... : Matches [usr10101, timmy, john91]
    Searching for hash(letmein) in users' hash list...   : Matches [wilson10, dragonslayerX, joe1984]
    Searching for hash(s3cr3t) in users' hash list...    : Matches [bruce19, knuth1337, john87]
    Searching for hash(z@29hjja) in users' hash list...  : No users used this password

    This attack allows an attacker to apply a dictionary or brute-force attack to many hashes at the same time, without having to pre-compute a lookup table.

    First, the attacker creates a lookup table that maps each password hash from the compromised user account database to a list of users who had that hash. The attacker then hashes each password guess and uses the lookup table to get a list of users whose password was the attacker’s guess. This attack is especially effective because it is common for many users to have the same password.

  • Rainbow Tables

    Rainbow tables are a time-memory trade-off technique. They are like lookup tables, except that they sacrifice hash cracking speed to make the lookup tables smaller. Because they are smaller, the solutions to more hashes can be stored in the same amount of space, making them more effective. Rainbow tables that can crack any md5 hash of a password up to 8 characters long exist.

Next, we’ll look at a technique called salting, which makes it impossible to use lookup tables and rainbow tables to crack a hash.

Adding Salt

hash("hello")                    = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
hash("hello" + "QxLUF1bgIAdeQX") = 9e209040c863f84a31e719795b2577523954739fe5ed3b58a75cff2127075ed1
hash("hello" + "bv5PehSMfV11Cd") = d1d3ec2e6f20fd420d50e2642992841d8338a314b8ea157c9e18477aaef226ab
hash("hello" + "YYLmfY6IehjZMQ") = a49670c3c18b9e079b9cfaf51634f563dc8ae3070db2c4a8544305df1b60f007

Lookup tables and rainbow tables only work because each password is hashed the exact same way. If two users have the same password, they’ll have the same password hashes. We can prevent these attacks by randomizing each hash, so that when the same password is hashed twice, the hashes are not the same.

We can randomize the hashes by appending or prepending a random string, called a salt, to the password before hashing. As shown in the example above, this makes the same password hash into a completely different string every time. To check if a password is correct, we need the salt, so it is usually stored in the user account database along with the hash, or as part of the hash string itself.

The salt does not need to be secret. Just by randomizing the hashes, lookup tables, reverse lookup tables, and rainbow tables become ineffective. An attacker won’t know in advance what the salt will be, so they can’t pre-compute a lookup table or rainbow table. If each user’s password is hashed with a different salt, the reverse lookup table attack won’t work either.

In the next section, we’ll look at how salt is commonly implemented incorrectly.

The WRONG Way: Short Salt & Salt Reuse

The most common salt implementation errors are reusing the same salt in multiple hashes, or using a salt that is too short.

Salt Reuse

A common mistake is to use the same salt in each hash. Either the salt is hard-coded into the program, or is generated randomly once. This is ineffective because if two users have the same password, they’ll still have the same hash. An attacker can still use a reverse lookup table attack to run a dictionary attack on every hash at the same time. They just have to apply the salt to each password guess before they hash it. If the salt is hard-coded into a popular product, lookup tables and rainbow tables can be built for that salt, to make it easier to crack hashes generated by the product.

A new random salt must be generated each time a user creates an account or changes their password.

Short Salt

If the salt is too short, an attacker can build a lookup table for every possible salt. For example, if the salt is only three ASCII characters, there are only 95x95x95 = 857,375 possible salts. That may seem like a lot, but if each lookup table contains only 1MB of the most common passwords, collectively they will be only 837GB, which is not a lot considering 1000GB hard drives can be bought for under $100 today.

For the same reason, the username shouldn’t be used as a salt. Usernames may be unique to a single service, but they are predictable and often reused for accounts on other services. An attacker can build lookup tables for common usernames and use them to crack username-salted hashes.

To make it impossible for an attacker to create a lookup table for every possible salt, the salt must be long. A good rule of thumb is to use a salt that is the same size as the output of the hash function. For example, the output of SHA256 is 256 bits (32 bytes), so the salt should be at least 32 random bytes.

The WRONG Way: Double Hashing & Wacky Hash Functions

This section covers another common password hashing misconception: wacky combinations of hash algorithms. It’s easy to get carried away and try to combine different hash functions, hoping that the result will be more secure. In practice, though, there is very little benefit to doing it. All it does is create interoperability problems, and can sometimes even make the hashes less secure. Never try to invent your own crypto, always use a standard that has been designed by experts. Some will argue that using multiple hash functions makes the process of computing the hash slower, so cracking is slower, but there’s a better way to make the cracking process slower as we’ll see later.

Here are some examples of poor wacky hash functions I’ve seen suggested in forums on the internet.

md5(sha1(password))
-------
md5(md5(salt) + md5(password))
-------
sha1(sha1(password))
-------
sha1(str_rot13(password + salt))
-------
md5(sha1(md5(md5(password) + sha1(password)) + md5(password)))

Do not use any of these.

Note: This section has proven to be controversial. I’ve received a number of emails arguing that wacky hash functions are a good thing, because it’s better if the attacker doesn’t know which hash function is in use, it’s less likely for an attacker to have pre-computed a rainbow table for the wacky hash function, and it takes longer to compute the hash function.

An attacker cannot attack a hash when he doesn’t know the algorithm, but note Kerckhoffs’s principle, that the attacker will usually have access to the source code (especially if it’s free or open source software), and that given a few password-hash pairs from the target system, it is not difficult to reverse engineer the algorithm. It does take longer to compute wacky hash functions, but only by a small constant factor. It’s better to use an iterated algorithm that’s designed to be extremely hard to parallelize (these are discussed below). And, properly salting the hash solves the rainbow table problem.

If you really want to use a standardized “wacky” hash function like HMAC, then it’s OK. But if your reason for doing so is to make the hash computation slower, read the section below about key stretching first.

Compare these minor benefits to the risks of accidentally implementing a completely insecure hash function and the interoperability problems wacky hashes create. It’s clearly best to use a standard and well-tested algorithm.

Hash Collisions

Because hash functions map arbitrary amounts of data to fixed-length strings, there must be some inputs that hash into the same string. Cryptographic hash functions are designed to make these collisions incredibly difficult to find. From time to time, cryptographers find “attacks” on hash functions that make finding collisions easier. A recent example is the MD5 hash function, for which collisions have actually been found.

Collision attacks are a sign that it may be more likely for a string other than the user’s password to have the same hash. However, finding collisions in even a weak hash function like MD5 requires a lot of dedicated computing power, so it is very unlikely that these collisions will happen “by accident” in practice. A password hashed using MD5 and salt is, for all practical purposes, just as secure as if it were hashed with SHA256 and salt. Nevertheless, it is a good idea to use a more secure hash function like SHA256, SHA512, RipeMD, or WHIRLPOOL if possible.

The RIGHT Way: How to Hash Properly

This section describes exactly how passwords should be hashed. The first subsection covers the basics—everything that is absolutely necessary. The following subsections explain how the basics can be augmented to make the hashes even harder to crack.

The Basics: Hashing with Salt

Warning: Do not just read this section. You absolutely must implement the stuff in the next section: “Making Password Cracking Harder: Slow Hash Functions”.

We’ve seen how malicious hackers can crack plain hashes very quickly using lookup tables and rainbow tables. We’ve learned that randomizing the hashing using salt is the solution to the problem. But how do we generate the salt, and how do we apply it to the password?

Salt should be generated using a Cryptographically Secure Pseudo-Random Number Generator (CSPRNG). CSPRNGs are very different than ordinary pseudo-random number generators, like the “C” language’s rand() function. As the name suggests, CSPRNGs are designed to be cryptographically secure, meaning they provide a high level of randomness and are completely unpredictable. We don’t want our salts to be predictable, so we must use a CSPRNG. The following table lists some CSPRNGs that exist for some popular programming platforms.

Platform                             CSPRNG
PHP                                  mcrypt_create_iv, openssl_random_pseudo_bytes
Java                                 java.security.SecureRandom
.NET (C#, VB)                        System.Security.Cryptography.RNGCryptoServiceProvider
Ruby                                 SecureRandom
Python                               os.urandom
Perl                                 Math::Random::Secure
C/C++ (Windows API)                  CryptGenRandom
Any language on GNU/Linux or Unix    Read from /dev/random or /dev/urandom

The salt needs to be unique per-user, per-password. Every time a user creates an account or changes their password, the password should be hashed using a new random salt. Never reuse a salt. The salt also needs to be long, so that there are many possible salts. As a rule of thumb, make your salt at least as long as the hash function’s output. The salt should be stored in the user account table alongside the hash.

To Store a Password

  1. Generate a long random salt using a CSPRNG.
  2. Prepend the salt to the password and hash it with a standard password hashing function like Argon2, bcrypt, scrypt, or PBKDF2.
  3. Save both the salt and the hash in the user’s database record.

To Validate a Password

  1. Retrieve the user’s salt and hash from the database.
  2. Prepend the salt to the given password and hash it using the same hash function.
  3. Compare the hash of the given password with the hash from the database. If they match, the password is correct. Otherwise, the password is incorrect.
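
As an illustration of the two procedures above (and not a substitute for the vetted libraries recommended in the warning near the top), a minimal Java sketch using the JDK’s SecureRandom and PBKDF2 might look like this. The iteration count is a placeholder that you should benchmark, as discussed below:

import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStorage {
    private static final int SALT_BYTES = 32;        // as long as the hash output
    private static final int HASH_BITS = 256;
    private static final int ITERATIONS = 100_000;   // placeholder; benchmark for your hardware

    // To store: generate a fresh random salt and derive the hash.
    public static byte[] newSalt() {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);          // CSPRNG, never rand()
        return salt;
    }

    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, HASH_BITS);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }

    // To validate: re-derive with the stored salt and compare.
    // A constant-time comparison (see slowEquals below) is preferable to Arrays.equals.
    public static boolean validate(char[] password, byte[] salt, byte[] storedHash)
            throws Exception {
        return Arrays.equals(hash(password, salt), storedHash);
    }
}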

In a Web Application, always hash on the server

If you are writing a web application, you might wonder where to hash. Should the password be hashed in the user’s browser with JavaScript, or should it be sent to the server “in the clear” and hashed there?

Even if you are hashing the user’s passwords in JavaScript, you still have to hash the hashes on the server. Consider a website that hashes users’ passwords in the user’s browser without hashing the hashes on the server. To authenticate a user, this website will accept a hash from the browser and check if that hash exactly matches the one in the database. This seems more secure than just hashing on the server, since the users’ passwords are never sent to the server, but it’s not.

The problem is that the client-side hash logically becomes the user’s password. All the user needs to do to authenticate is tell the server the hash of their password. If a bad guy got a user’s hash they could use it to authenticate to the server, without knowing the user’s password! So, if the bad guy somehow steals the database of hashes from this hypothetical website, they’ll have immediate access to everyone’s accounts without having to guess any passwords.

This isn’t to say that you shouldn’t hash in the browser, but if you do, you absolutely have to hash on the server too. Hashing in the browser is certainly a good idea, but consider the following points for your implementation:

  • Client-side password hashing is not a substitute for HTTPS (SSL/TLS). If the connection between the browser and the server is insecure, a man-in-the-middle can modify the JavaScript code as it is downloaded to remove the hashing functionality and get the user’s password.
  • Some web browsers don’t support JavaScript, and some users disable JavaScript in their browser. So for maximum compatibility, your app should detect whether or not the browser supports JavaScript and emulate the client-side hash on the server if it doesn’t.
  • You need to salt the client-side hashes too. The obvious solution is to make the client-side script ask the server for the user’s salt. Don’t do that, because it lets the bad guys check if a username is valid without knowing the password. Since you’re hashing and salting (with a good salt) on the server too, it’s OK to use the username (or email) concatenated with a site-specific string (e.g. domain name) as the client-side salt.

Making Password Cracking Harder: Slow Hash Functions

Salt ensures that attackers can’t use specialized attacks like lookup tables and rainbow tables to crack large collections of hashes quickly, but it doesn’t prevent them from running dictionary or brute-force attacks on each hash individually. High-end graphics cards (GPUs) and custom hardware can compute billions of hashes per second, so these attacks are still very effective. To make these attacks less effective, we can use a technique known as key stretching.

The idea is to make the hash function very slow, so that even with a fast GPU or custom hardware, dictionary and brute-force attacks are too slow to be worthwhile. The goal is to make the hash function slow enough to impede attacks, but still fast enough to not cause a noticeable delay for the user.

Key stretching is implemented using a special type of CPU-intensive hash function. Don’t try to invent your own; simply iteratively hashing the hash of the password isn’t enough, as it can be parallelized in hardware and executed as fast as a normal hash. Use a standard algorithm like PBKDF2 or bcrypt. A PHP implementation of PBKDF2 can be found in the defuse/password-hashing project mentioned above.

These algorithms take a security factor or iteration count as an argument. This value determines how slow the hash function will be. For desktop software or smartphone apps, the best way to choose this parameter is to run a short benchmark on the device to find the value that makes the hash take about half a second. This way, your program can be as secure as possible without affecting the user experience.
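
For illustration, a crude way to run such a benchmark with the JDK’s PBKDF2 is sketched below. The starting count is arbitrary, and the target time is the roughly-half-a-second guideline from the text:

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class IterationBenchmark {
    // Double the PBKDF2 iteration count until a single hash takes ~0.5 seconds.
    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[32];   // a fixed dummy salt is fine for timing purposes
        for (int iterations = 1_000; ; iterations *= 2) {
            long start = System.nanoTime();
            SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(new PBEKeySpec("benchmark".toCharArray(), salt, iterations, 256));
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(iterations + " iterations: " + ms + " ms");
            if (ms >= 500) break;     // stop once the target delay is reached
        }
    }
}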

If you use a key stretching hash in a web application, be aware that you will need extra computational resources to process large volumes of authentication requests, and that key stretching may make it easier to run a Denial of Service (DoS) attack on your website. I still recommend using key stretching, but with a lower iteration count. You should calculate the iteration count based on your computational resources and the expected maximum authentication request rate. The denial of service threat can be eliminated by making the user solve a CAPTCHA every time they log in. Always design your system so that the iteration count can be increased or decreased in the future.

If you are worried about the computational burden, but still want to use key stretching in a web application, consider running the key stretching algorithm in the user’s browser with JavaScript. The Stanford JavaScript Crypto Library includes PBKDF2. The iteration count should be set low enough that the system is usable with slower clients like mobile devices, and the system should fall back to server-side computation if the user’s browser doesn’t support JavaScript. Client-side key stretching does not remove the need for server-side hashing. You must hash the hash generated by the client the same way you would hash a normal password.

Impossible-to-crack Hashes: Keyed Hashes and Password Hashing Hardware

As long as an attacker can use a hash to check whether a password guess is right or wrong, they can run a dictionary or brute-force attack on the hash. The next step is to add a secret key to the hash so that only someone who knows the key can use the hash to validate a password. This can be accomplished two ways. Either the hash can be encrypted using a cipher like AES, or the secret key can be included in the hash using a keyed hash algorithm like HMAC.

This is not as easy as it sounds. The key has to be kept secret from an attacker even in the event of a breach. If an attacker gains full access to the system, they’ll be able to steal the key no matter where it is stored. The key must be stored in an external system, such as a physically separate server dedicated to password validation, or a special hardware device attached to the server such as the YubiHSM.

I highly recommend this approach for any large scale (more than 100,000 users) service. I consider it necessary for any service hosting more than 1,000,000 user accounts.

If you can’t afford multiple dedicated servers or special hardware devices, you can still get some of the benefits of keyed hashes on a standard web server. Most databases are breached using SQL Injection Attacks, which, in most cases, don’t give attackers access to the local filesystem (disable local filesystem access in your SQL server if it has this feature). If you generate a random key and store it in a file that isn’t accessible from the web, and include it into the salted hashes, then the hashes won’t be vulnerable if your database is breached using a simple SQL injection attack. Don’t hard-code a key into the source code, generate it randomly when the application is installed. This isn’t as secure as using a separate system to do the password hashing, because if there are SQL injection vulnerabilities in a web application, there are probably other types, such as Local File Inclusion, that an attacker could use to read the secret key file. But, it’s better than nothing.

Please note that keyed hashes do not remove the need for salt. Clever attackers will eventually find ways to compromise the keys, so it is important that hashes are still protected by salt and key stretching.
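
For reference, a keyed hash with the JDK’s built-in HMAC support might be computed like this. A minimal sketch; how the secret key is stored and loaded (e.g. from a file outside the web root, or from separate hardware, as discussed above) is deliberately left out:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class KeyedHash {
    // HMAC the salted, stretched password hash with a secret key that lives
    // outside the database, so a database-only breach cannot validate guesses.
    public static byte[] hmacSha256(byte[] secretKey, byte[] saltedHash) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        return mac.doFinal(saltedHash);
    }
}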

Other Security Measures

Password hashing protects passwords in the event of a security breach. It does not make the application as a whole more secure. Much more must be done to prevent the password hashes (and other user data) from being stolen in the first place.

Even experienced developers must be educated in security in order to write secure applications. A great resource for learning about web application vulnerabilities is The Open Web Application Security Project (OWASP). A good introduction is the OWASP Top Ten Vulnerability List. Unless you understand all the vulnerabilities on the list, do not attempt to write a web application that deals with sensitive data. It is the employer’s responsibility to ensure all developers are adequately trained in secure application development.

Having a third party “penetration test” your application is a good idea. Even the best programmers make mistakes, so it always makes sense to have a security expert review the code for potential vulnerabilities. Find a trustworthy organization (or hire staff) to review your code on a regular basis. The security review process should begin early in an application’s life and continue throughout its development.

It is also important to monitor your website to detect a breach if one does occur. I recommend hiring at least one person whose full time job is detecting and responding to security breaches. If a breach goes undetected, the attacker can make your website infect visitors with malware, so it is extremely important that breaches are detected and responded to promptly.

Frequently Asked Questions

What hash algorithm should I use?

DO use:

  • Well-tested key stretching algorithms such as Argon2, bcrypt, scrypt, or PBKDF2, applied to a properly salted password.

DO NOT use:

  • Fast cryptographic hash functions such as MD5, SHA1, SHA256, SHA512, RipeMD, WHIRLPOOL, SHA3, etc.
  • Insecure versions of crypt ($1$, $2$, $2x$, $3$).
  • Any algorithm that you designed yourself. Only use technology that is in the public domain and has been well-tested by experienced cryptographers.

Even though there are no cryptographic attacks on MD5 or SHA1 that make their hashes easier to crack, they are old and are widely considered (somewhat incorrectly) to be inadequate for password storage. So I don’t recommend using them. An exception to this rule is PBKDF2, which is frequently implemented using SHA1 as the underlying hash function.

How should I allow users to reset their password when they forget it?

It is my personal opinion that all password reset mechanisms in widespread use today are insecure. If you have high security requirements, such as an encryption service would, do not let the user reset their password.

Most websites use an email loop to authenticate users who have forgotten their password. To do this, generate a random single-use token that is strongly tied to the account. Include it in a password reset link sent to the user’s email address. When the user clicks a password reset link containing a valid token, prompt them for a new password. Be sure that the token is strongly tied to the user account so that an attacker can’t use a token sent to his own email address to reset a different user’s password.

The token must be set to expire in 15 minutes or after it is used, whichever comes first. It is also a good idea to expire any existing password tokens when the user logs in (they remembered their password) or requests another reset token. If a token doesn’t expire, it can be forever used to break into the user’s account. Email (SMTP) is a plain-text protocol, and there may be malicious routers on the internet recording email traffic. And, a user’s email account (including the reset link) may be compromised long after their password has been changed. Making the token expire as soon as possible reduces the user’s exposure to these attacks.

Attackers will be able to modify the tokens, so don’t store the user account information or timeout information in them. They should be an unpredictable random binary blob used only to identify a record in a database table.

Never send the user a new password over email. Remember to pick a new random salt when the user resets their password. Don’t re-use the one that was used to hash their old password.
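
For example, such a token could be generated like this in Java. A sketch; HexFormat requires Java 17+, otherwise encode the bytes another way:

import java.security.SecureRandom;
import java.util.HexFormat;

public class ResetTokens {
    // An unpredictable, opaque token used only to identify a database record.
    public static String newResetToken() {
        byte[] token = new byte[32];
        new SecureRandom().nextBytes(token);
        return HexFormat.of().formatHex(token);   // HexFormat: Java 17+
    }
}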

What should I do if my user account database gets leaked/hacked?

Your first priority is to determine how the system was compromised and patch the vulnerability the attacker used to get in. If you do not have experience responding to breaches, I highly recommend hiring a third-party security firm.

It may be tempting to cover up the breach and hope nobody notices. However, trying to cover up a breach makes you look worse, because you’re putting your users at further risk by not informing them that their passwords and other personal information may be compromised. You must inform your users as soon as possible—even if you don’t yet fully understand what happened. Put a notice on the front page of your website that links to a page with more detailed information, and send a notice to each user by email if possible.

Explain to your users exactly how their passwords were protected—hopefully hashed with salt—and that even though they were protected with a salted hash, a malicious hacker can still run dictionary and brute force attacks on the hashes. Malicious hackers will use any passwords they find to try to login to a user’s account on a different website, hoping they used the same password on both websites. Inform your users of this risk and recommend that they change their password on any website or service where they used a similar password. Force them to change their password for your service the next time they log in. Most users will try to “change” their password to the original password to get around the forced change quickly. Use the current password hash to ensure that they cannot do this.

It is likely, even with salted slow hashes, that an attacker will be able to crack some of the weak passwords very quickly. To reduce the attacker’s window of opportunity to use these passwords, you should require, in addition to the current password, an email loop for authentication until the user has changed their password. See the previous question, “How should I allow users to reset their password when they forget it?” for tips on implementing email loop authentication.

Also tell your users what kind of personal information was stored on the website. If your database includes credit card numbers, you should instruct your users to look over their recent and future bills closely and cancel their credit card.

What should my password policy be? Should I enforce strong passwords?

If your service doesn’t have strict security requirements, then don’t limit your users. I recommend showing users information about the strength of their password as they type it, letting them decide how secure they want their password to be. If you have special security needs, enforce a minimum length of 12 characters and require at least two letters, two digits, and two symbols.

Do not force your users to change their password more often than once every six months, as doing so creates “user fatigue” and makes users less likely to choose good passwords. Instead, train users to change their password whenever they feel it has been compromised, and to never tell their password to anyone. If it is a business setting, encourage employees to use paid time to memorize and practice their password.

If an attacker has access to my database, can’t they just replace the hash of my password with their own hash and login?

Yes, but if someone has access to your database, they probably already have access to everything on your server, so they wouldn’t need to log in to your account to get what they want. The purpose of password hashing (in the context of a website) is not to protect the website from being breached, but to protect the passwords if a breach does occur.

You can prevent hashes from being replaced during a SQL injection attack by connecting to the database with two users with different permissions. One for the ‘create account’ code and one for the ‘login’ code. The ‘create account’ code should be able to read and write to the user table, but the ‘login’ code should only be able to read.

Why do I have to use a special algorithm like HMAC? Why can’t I just append the password to the secret key?

Hash functions like MD5, SHA1, and SHA2 use the Merkle–Damgård construction, which makes them vulnerable to what are known as length extension attacks. This means that given a hash H(X), an attacker can find the value of H(pad(X) + Y), for any other string Y, without knowing X. pad(X) is the padding function used by the hash.

This means that given a hash H(key + message), an attacker can compute H(pad(key + message) + extension), without knowing the key. If the hash was being used as a message authentication code, using the key to prevent an attacker from being able to modify the message and replace it with a different valid hash, the system has failed, since the attacker now has a valid hash of message + extension.

It is not clear how an attacker could use this attack to crack a password hash quicker. However, because of the attack, it is considered bad practice to use a plain hash function for keyed hashing. A clever cryptographer may one day come up with a clever way to use these attacks to make cracking faster, so use HMAC.

Should the salt come before or after the password?

It doesn’t matter, but pick one and stick with it for interoperability’s sake. Having the salt come before the password seems to be more common.

Why does the hashing code on this page compare the hashes in “length-constant” time?

Comparing the hashes in “length-constant” time ensures that an attacker cannot extract the hash of a password in an on-line system using a timing attack, then crack it off-line.

The standard way to check if two sequences of bytes (strings) are the same is to compare the first byte, then the second, then the third, and so on. As soon as you find a byte that isn’t the same for both strings, you know they are different and can return a negative response immediately. If you make it through both strings without finding any bytes that differ, you know the strings are the same and can return a positive result. This means that comparing two strings can take a different amount of time depending on how much of the strings match.

For example, a standard comparison of the strings “xyzabc” and “abcxyz” would immediately see that the first character is different and wouldn’t bother to check the rest of the string. On the other hand, when the strings “aaaaaaaaaaB” and “aaaaaaaaaaZ” are compared, the comparison algorithm scans through the block of “a” before it determines the strings are unequal.

Suppose an attacker wants to break into an on-line system that rate limits authentication attempts to one attempt per second. Also suppose the attacker knows all of the parameters to the password hash (salt, hash type, etc), except for the hash and (obviously) the password. If the attacker can get a precise measurement of how long it takes the on-line system to compare the hash of the real password with the hash of a password the attacker provides, he can use the timing attack to extract part of the hash and crack it using an offline attack, bypassing the system’s rate limiting.

First, the attacker finds 256 strings whose hashes begin with every possible byte. He sends each string to the on-line system, recording the amount of time it takes the system to respond. The string that takes the longest will be the one whose hash’s first byte matches the real hash’s first byte. The attacker now knows the first byte, and can continue the attack in a similar manner on the second byte, then the third, and so on. Once the attacker knows enough of the hash, he can use his own hardware to crack it, without being rate limited by the system.

It might seem like it would be impossible to run a timing attack over a network. However, it has been done, and has been shown to be practical. That’s why the code on this page compares strings in a way that takes the same amount of time no matter how much of the strings match.

How does the SlowEquals code work?

The previous question explains why SlowEquals is necessary; this one explains how the code actually works.

private static boolean slowEquals(byte[] a, byte[] b)
{
    int diff = a.length ^ b.length;
    for (int i = 0; i < a.length && i < b.length; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}

The code uses the XOR “^” operator to compare integers for equality, instead of the “==” operator. The reason why is explained below. The result of XORing two integers will be zero if and only if they are exactly the same. This is because 0 XOR 0 = 0, 1 XOR 1 = 0, 0 XOR 1 = 1, 1 XOR 0 = 1. If we apply that to all the bits in both integers, the result will be zero only if all the bits matched.

So, in the first line, if a.length is equal to b.length, the diff variable will get a zero value, but if not, it will get some non-zero value. Next, we compare the bytes using XOR, and OR the result into diff. This will set diff to a non-zero value if the bytes differ. Because ORing never un-sets bits, the only way diff will be zero at the end of the loop is if it was zero before the loop began (a.length == b.length) and all of the bytes in the two arrays match (none of the XORs resulted in a non-zero value).

The reason we need to use XOR instead of the “==” operator to compare integers is that “==” is usually translated/compiled/interpreted as a branch. For example, the C code “diff &= a == b” might compile to the following x86 assembly:

MOV EAX, [A]
CMP [B], EAX
JZ equal
AND [VALID], 0
JMP done
equal:
AND [VALID], 1
done:

The branching makes the code execute in a different amount of time depending on the equality of the integers and the CPU’s internal branch prediction state.

The C code “diff |= a ^ b” should compile to something like the following, whose execution time does not depend on the equality of the integers:

MOV EAX, [A]
XOR EAX, [B]
OR [DIFF], EAX

Why bother hashing?

Your users are entering their password into your website. They are trusting you with their security. If your database gets hacked, and your users’ passwords are unprotected, then malicious hackers can use those passwords to compromise your users’ accounts on other websites and services (most people use the same password everywhere). It’s not just your security that’s at risk, it’s your users’. You are responsible for your users’ security.

Source: Defuse Security

Let’s Encrypt Wildcard SSL Certificate using CERTBOT

What is a Wildcard Certificate?

In computer networking, a wildcard certificate is a public key certificate which can be used with multiple subdomains of a domain. The principal use is for securing web sites with HTTPS, but there are also applications in many other fields. Compared with conventional certificates, a wildcard certificate can be cheaper and more convenient than a certificate for each subdomain.

Example:

A single wildcard certificate for https://*.secops.in will secure all these subdomains on the secops.in domain:

  • www.secops.in
  • amisafe.secops.in
  • login.secops.in

Instead of getting a separate certificate for each subdomain, you can use a single certificate for the domain and all of its first-level subdomains, reducing cost.

Because the wildcard only covers one level of subdomains (the asterisk doesn’t match full stops), these domains would not be valid for the certificate:

  • test.login.secops.in

The “naked” domain is valid when added separately as a Subject Alternative Name (SubjectAltName):

  • secops.in

What is Let’s Encrypt?

Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. It is a service provided by the Internet Security Research Group (ISRG).

Let’s Encrypt gives people the digital certificates they need in order to enable HTTPS (SSL/TLS) for websites, for free, in the most user-friendly way possible. The goal is to create a more secure and privacy-respecting Web.

The key principles behind Let’s Encrypt are:

  • Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
  • Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
  • Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
  • Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
  • Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.

More detailed information about how the Let’s Encrypt CA works is available on the Let’s Encrypt website.

What is Certbot?

Certbot is an easy-to-use automatic client that fetches and deploys SSL/TLS certificates for your webserver. Certbot was developed by EFF and others as a client for Let’s Encrypt and was previously known as “the official Let’s Encrypt client” or “the Let’s Encrypt Python client.” Certbot will also work with any other CAs that support the ACME protocol.

While there are many other clients that implement the ACME protocol to fetch certificates, Certbot is the most extensive client and can automatically configure your webserver to start serving over HTTPS immediately. For Apache, it can also optionally automate security tasks such as tuning ciphersuites and enabling important security features such as HTTP → HTTPS redirects, OCSP stapling, HSTS, and upgrade-insecure-requests.

Certbot is part of EFF’s larger effort to encrypt the entire Internet. Websites need to use HTTPS to secure the web. Along with HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship.

Certbot is the work of many authors, including a team of EFF staff and numerous open source contributors.

The Certbot privacy policy is described on the EFF website.

Steps to generate Free Let’s Encrypt Wildcard SSL Certificate

Step#1: Install latest Certbot

$ wget https://dl.eff.org/certbot-auto
$ chmod a+x ./certbot-auto
$ sudo mv certbot-auto /usr/bin/certbot
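
To confirm the installation worked before proceeding, you can print the client version (the first run of certbot-auto will also bootstrap its OS-level dependencies):

$ certbot --version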

Proceed to Step#2

Step#2: Generate the wildcard certificate with DNS Challenge (Eg. Domain: *.secops.in)

$ sudo certbot certonly \
--server https://acme-v02.api.letsencrypt.org/directory \
--manual --preferred-challenges dns \
-d "*.secops.in" -d secops.in

An important parameter to notice is --server https://acme-v02.api.letsencrypt.org/directory, which instructs the Certbot client to use v2 of the Let’s Encrypt API (required for wildcard certs). Also notice the 2 domains: the first is the wildcard, and the second is the bare (apex) domain, since the wildcard does not cover the apex domain or deeper levels of subdomains (explained in the first section). The wildcard is quoted so that the shell does not try to expand the asterisk.

The Certbot client will walk you through the process of registering an account, and it will instruct you on what to do to complete the challenges.

Proceed to Step#3

Step#3: Create a DNS TXT Record as instructed in the “secops.in” DNS Zone File

-------------------------------------------------------------------------------
Please deploy a DNS TXT record under the name
_acme-challenge.secops.in with the following value:
 
02HdxCLqTbjvjtO7mnLV1XXXXXXExamplEONlyabC
 
Before continuing, verify the record is deployed.
-------------------------------------------------------------------------------
Press Enter to Continue

Proceed to Step#4 for results

Step#4: Successful or Unsuccessful Messages

A Successful message would look like:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
 /etc/letsencrypt/live/secops.in/fullchain.pem
 Your key file has been saved at:
 /etc/letsencrypt/live/secops.in/privkey.pem
 Your cert will expire on 2018-07-22. To obtain a new or tweaked
 version of this certificate in the future, simply run certbot
 again. To non-interactively renew *all* of your certificates, run
 "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

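Note that certificates obtained with --manual cannot be renewed unattended out of the box: "certbot renew" will fail for them unless you supply a script that publishes the new TXT record. A minimal sketch, where update-dns-txt.sh is a hypothetical script of your own that talks to your DNS provider's API:

$ sudo certbot renew --dry-run \
--manual-auth-hook /usr/local/bin/update-dns-txt.sh

The hook receives the domain and the validation token through the CERTBOT_DOMAIN and CERTBOT_VALIDATION environment variables.
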

An Unsuccessful message would look like:

Failed authorization procedure. secops.in (dns-01): urn:ietf:params:acme:error:unauthorized :: 
The client lacks sufficient authorization :: 
Incorrect TXT record "02HdxCLqTbjvjtO7mnxxxxXXxXxXHGZM2LlNXCSgOTQTzlp51ARngrBadcOnFigYvtv6SOg-BadcOnFigLts37Q0" 
found at _acme-challenge.secops.in

IMPORTANT NOTES:

 - The following errors were reported by the server:
   Domain: secops.in
   Type:   unauthorized
   Detail: Incorrect TXT record
   "02HdxCLqTbjvjtO7mnxxxxXXxXxXHGZM2LlNXCSgOTQTzlp51ARngrBadcOnFigYvtv6SOg-BadcOnFigLts37Q0"
   found at _acme-challenge.secops.in
   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.

In such cases, please recheck your DNS TXT record using DNS lookup tools like dig or nslookup, as shown below:

$ dig TXT _acme-challenge.secops.in

; <<>> DiG 9.10.6 <<>> TXT _acme-challenge.secops.in
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10301
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:
;_acme-challenge.secops.in. IN TXT

;; ANSWER SECTION:
_acme-challenge.secops.in. 225 IN TXT "ECtBiSVn-qIufdfzHLTTlWVx09mWAv8MbzSZGFBbkQc"

;; Query time: 33 msec
;; SERVER: 208.67.220.220#53(208.67.220.220)
;; WHEN: Mon Apr 23 14:04:44 IST 2018
;; MSG SIZE  rcvd: 154

From the output above, it is clear that the TXT record is not serving the value provided in Step#3 (remember that DNS changes can take a while to propagate, depending on the record's TTL). In that case, re-run the tool and ensure that the TXT record is set up exactly as instructed, for a successful SSL wildcard certificate generation.
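
Once issuance succeeds, you can double-check that the certificate really covers both the wildcard and the apex domain by inspecting it with openssl (paths as reported by Certbot above):

$ openssl x509 -in /etc/letsencrypt/live/secops.in/fullchain.pem -noout -text | grep -A1 'Subject Alternative Name'

The output should list both DNS:*.secops.in and DNS:secops.in.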

Sources: Let’s Encrypt, Certbot


Secure your API – Best Practices

Why do APIs need special attention?

As an increasing number of organizations provide API access to make their information available to a wider audience, securing that access is likewise of increasing importance. With the growing adoption of cloud, mobile, and hybrid environments the risks are increasing. Cyber threats and DDoS attacks are targeting enterprise applications as back-end systems become more accessible. In such situations, the API can be a significant point of vulnerability with its ability to offer programmatic access to external developers.

The previous wave of security breaches occurred when organizations opened up access to their web applications. Defences such as web application firewalls (WAF) were introduced to mitigate those breaches. However, these defences are not effective against all API attacks, and you'll need to focus on the security of your API interfaces.

API Security

The predominant API interface is the REST API, which is based on the HTTP protocol and generally returns JSON-formatted responses. Securing your API interfaces has much in common with web access security, but presents additional challenges due to:

  • Exposure to a wider range of data
  • Direct access to the back-end server
  • Ability to download large volumes of data
  • Different usage patterns

This topic has been covered on several sites, such as the OWASP REST Security guidance, and we will summarise the main challenges and defences for API security here.

Authentication

The first defence is to identify the user, human or application, and determine that the user has permission to invoke your API. We are all familiar with entering a user ID and password, and possibly an additional identifier, to access a web interface. The API equivalent is an API ID and an API key specified on each request to authenticate the caller. The API key is a unique string generated for the application for each user of the API.
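
For illustration, a request carrying an API key might look like the curl call below; the endpoint and the X-API-Key header name are hypothetical, as key-passing conventions (headers, query parameters, HTTP basic auth) vary by provider:

$ curl -H 'X-API-Key: 9f8e7d6c5b4a' https://api.example.com/v1/orders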

Access Control

Not all authenticated users will necessarily be authorized to access all provided APIs. For example, some users require access to retrieve (GET) information, but should not be able to change (PUT) any information. Using an access control framework, such as OAuth, you control the list of APIs that each specific API key can access.

To prevent a massive amount of API requests that can cause a DDoS attack or other misuse of the API service, apply a limit to the number of requests in a given time interval for each API. When the rate is exceeded, block access from the API key, at least temporarily, and return the 429 (Too Many Requests) HTTP error code. A client-side view of this behaviour is sketched below.
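
From the client's side, a rate-limited API answers with 429 and, ideally, a Retry-After header. A quick sketch for observing this against the same hypothetical endpoint and key as above:

for i in $(seq 1 100); do
  curl -s -o /dev/null -w '%{http_code}\n' \
    -H 'X-API-Key: 9f8e7d6c5b4a' https://api.example.com/v1/orders
done | sort | uniq -c

Once the limit kicks in, the tally of 200s gives way to a growing count of 429s.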

Encryption

To protect the request in transit, require HTTPS for all access so that the messages are secured and encoded with TLS.

To protect the JSON-formatted response, consider employing the JSON Web Token (JWT) standard. As stated on that site,

JWT is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA.
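
As a rough illustration of the signing step, an HS256 token can even be assembled by hand with openssl. This is only a minimal sketch to make the format concrete (use a vetted JWT library in production); the claims and the secret are placeholders:

# base64url-encode stdin: standard base64, then swap the URL-unsafe
# characters and strip the padding
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '%s' '{"sub":"user-42","exp":1735689600}' | b64url)
secret='placeholder-shared-secret'
# HS256 = HMAC-SHA256 over "header.payload"
sig=$(printf '%s' "$header.$payload" | openssl dgst -binary -sha256 -hmac "$secret" | b64url)
echo "$header.$payload.$sig"

The receiver recomputes the HMAC over header.payload with the shared secret and accepts the token only if it matches the signature.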

Confidentiality

As a best practice, we do not want to expose any more information than is required. For that reason, be careful that error messages do not reveal too much information.

Application Layer Attacks

Some actors can target your system after they circumvent your secured access and data interface. For example, they could obtain authorization credentials via phishing. You’ll need to validate all data to prevent application layer attacks, such as:

  • Cross-Site Scripting – Malicious scripts are injected into one of the request parameters
  • Code Injection – Valid code is injected into services such as SQL (SQL injection) or XQuery to put the interface under user control
  • Business Logic Exploits – Allow the attacker to circumvent the business rules
  • Parameter Pollution Attacks – Exploit the data sent in the API request by modifying its parameters

Apply strict INPUT VALIDATION as you would on any interface (a small sketch follows this list), including:

  • Restrict, where possible, parameter values to a whitelist of expected values
  • To facilitate a whitelist, have strong typing of input value
  • Validate posted structured data against a formal schema language, to restrict the content and structure
  • Blacklist risky content, such as SQL schema manipulation statements
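
A minimal sketch of the whitelist idea at the shell level; the parameter name and its allowed values are hypothetical:

# Accept only one of three expected values for a "status" parameter
status="$1"
case "$status" in
  active|inactive|pending)
    echo "status ok: $status"
    ;;
  *)
    echo "rejecting unexpected status value" >&2
    exit 1
    ;;
esac

The same idea applies in any language: enumerate what is allowed and reject everything else, rather than trying to blacklist every dangerous input.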

Log all API access. This is essential for resolving any issues. From these logs you can easily monitor activity and potentially discover unusual patterns or excessive usage.

Security Framework

Consider using an existing API framework that has many of the security features built in. If you must develop your own, separate out the security portion from the application portion. In this way, security is uniformly built in and developers can focus on the application logic.

After all that, don't neglect to allocate resources for testing the security of the APIs. Make sure to test all the defences mentioned in this article.

 

Keep Defending !


Compile NGINX with NAXSI – Part#1

Why Re-Invent the Wheel?

In this tutorial/walkthrough, I shall provide detailed instructions on how to compile and configure NAXSI with NGINX on Ubuntu 14.04, as the standard Ubuntu repos carry a very old NAXSI-enabled NGINX build which I have personally found to be very buggy!


Schedule:

Part#1: Installation and basic configuration of NGINX-NAXSI

Part#2: Pumping the NGINX and NAXSI logs to ELASTICSEARCH

Part#3: Analyzing the logs and automating the generation of false-positive exclusions

Part#4: Conclusion


Requirements:

  • Ubuntu 14.04
  • Git tools installed and set up on the server

The rest of the dependencies shall be provided below.


Step#1: Download Config Files

Grab the helper config files, which I have preconfigured to work with Ubuntu 14.04, from my Git repo: https://github.com/aarvee11/nginx_1.11.6-naxsi_latest

cd /tmp
git clone https://github.com/aarvee11/nginx_1.11.6-naxsi_latest.git

Step#2: Install Dependencies

As NAXSI and NGINX are being compiled from source, we will have to set up our server manually by installing all the dependencies below:

apt-get update
apt-get install automake gcc make pkg-config libtool g++ libfl-dev bison build-essential libbison-dev libyajl-dev liblmdb-dev libpcre3-dev libcurl4-openssl-dev libgeoip-dev libxml2-dev libyajl2 libxslt-dev openssl libssl-dev libperl-dev libgd2-xpm-dev

Step#3: Download and Setup

Run the following commands as given below, ensuring that the necessary permissions are in place.

I have a habit of playing it risky by running the commands as "sudo su", but it's not really safe to play that risky on a production machine. Please follow your own best practices to get things running on the server!

cd /usr/src 
wget https://github.com/nbs-system/naxsi/archive/master.zip 
wget http://nginx.org/download/nginx-1.11.6.tar.gz 
unzip master.zip 
tar -zxvf nginx-1.11.6.tar.gz 
git clone https://github.com/openresty/headers-more-nginx-module.git 
git clone https://github.com/flant/nginx-http-rdns.git 
cd /usr/src/nginx-1.11.6/ 
./configure --prefix=/etc/nginx \
 --add-module=/usr/src/naxsi-master/naxsi_src/ \
 --add-module=/usr/src/headers-more-nginx-module \
 --add-module=/usr/src/nginx-http-rdns/ \
 --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' \
 --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' \
 --http-log-path=/var/log/nginx/access.log \
 --error-log-path=/var/log/nginx/error.log \
 --with-debug --with-pcre-jit --with-ipv6 \
 --with-http_ssl_module --with-http_stub_status_module \
 --with-http_realip_module \
 --with-http_addition_module \
 --with-http_dav_module \
 --with-http_geoip_module \
 --with-http_gzip_static_module \
 --with-http_image_filter_module \
 --with-http_sub_module \
 --with-http_xslt_module \
 --with-mail \
 --with-mail_ssl_module \
 --http-client-body-temp-path=/tmp/client_body_temp \
 --http-proxy-temp-path=/tmp/proxy_temp \
 --http-fastcgi-temp-path=/tmp/fastcgi_temp \
 --http-uwsgi-temp-path=/tmp/uwsgi_temp \
 --http-scgi-temp-path=/tmp/scgi_temp
make
make install
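
Before wiring up the init script, it is worth confirming that the build succeeded and carries the expected modules; since no --sbin-path was given, the binary lands under the chosen --prefix:

/etc/nginx/sbin/nginx -V

The configure arguments in the output should list the naxsi, headers-more and rdns modules.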

Step#4: Create/Copy Config files

Using the preconfigured files provided in Step#1, we shall now copy them over to the correct locations as shown below:

cp /tmp/nginx_1.11.6-naxsi_latest/etc/init.d/nginx /etc/init.d/nginx
cp -r /tmp/nginx_1.11.6-naxsi_latest/nginx/conf/* /etc/nginx/conf/
ln -s /etc/nginx/sbin/nginx /usr/sbin/nginx
mkdir /etc/nginx/conf/sites-available
mkdir /etc/nginx/conf/sites-enabled
cp /usr/src/naxsi-master/naxsi_config/naxsi_core.rules /etc/nginx/conf/
mkdir /etc/nginx/conf/naxsi-whitelist/
touch /etc/nginx/conf/whitelist.conf

Step#5: Configure your website on NGINX-NAXSI

You can use the sample configuration found under the "/tmp/nginx_1.11.6-naxsi_latest/nginx/sites-available" directory (part of the repo cloned in Step#1).

Edit the config file to match your requirements, such as the site name, upstream IP/server, etc. The sample below is provided for quick reference:

server {
  listen 80 default_server;
  #listen [::]:80 default_server ipv6only=on;
  #root /var/www/nginx/html;
  #index index.html index.htm;
  # Make site accessible from http://localhost/
  server_name *.example.com; # Replace it with your website hostname. * is wildcard.
  set $naxsi_extensive_log 1;
  location / {
    # Uncomment to enable naxsi on this location
    include /etc/nginx/conf/naxsi.rules;
    include /etc/nginx/conf/naxsi-whitelist/*.rules;
    #try_files $uri $uri/ @rewrite;
    proxy_pass http://127.0.1.80:8000; # Replace with your backend/upstream address and port
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Connection close;
    proxy_set_header X-Real-IP $remote_addr;
    # Comment the below line if there is already an upstream reverse proxy server that is setting the actual client IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Once the configuration is complete, run the commands below to create a symlink to the config file in the sites-enabled directory so that NGINX can pick it up:

cd /etc/nginx/conf/sites-enabled
ln -s ../sites-available/<virtual-host-config-file> .
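
Before going live, validate the configuration and start the server. A quick sanity check, assuming the paths from this build (--prefix=/etc/nginx, the symlink and init script from Step#4):

nginx -t -c /etc/nginx/conf/nginx.conf
service nginx start
tail -f /var/log/nginx/error.log

NAXSI writes its events to the error log, so tailing it while sending a few test requests is an easy way to confirm the WAF is actually inspecting traffic.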

Conclusion:

With all the above steps done, we are now ready to deploy our web application in an alert-only mode, in which NAXSI starts scanning incoming web requests and generating events for anything that looks suspicious. Expect a lot of events at first, until the whitelists are tuned.

In the upcoming second part, I shall provide detailed steps on how to set up logging for NGINX and NAXSI using Elasticsearch.


As always I say:

Keep Defending !


The Fight with Fake Facebook Bots !

Facebook IP Lists

Checked daily and updated whenever Facebook modifies their list.

I am sure most web admins are tired of the FAKE Facebook bots that manually set their User-Agent to impersonate the real Facebook crawler and keep scraping data.

Hence, here are the lists of IPv4 and IPv6 addresses for whitelisting the real Facebook IP addresses, as per Facebook's documentation.

Please note that the list is checked daily at 12:00 PM IST (GMT+5:30) and updated if the IP lists have changed; otherwise, the repo history shows how long the addresses have remained unchanged since the first check-in.
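
For reference, Facebook's documentation points at its ASN (AS32934) as the source of truth, and the published ranges can be pulled from the RADB route registry (assuming a whois client is installed):

whois -h whois.radb.net -- '-i origin AS32934' | grep ^route

The repo essentially snapshots this output, so you do not have to query and parse it yourself.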

Link: GitHub Repo


Secure SDLC

Introduction:

SDLC stands for Software Development LifeCycle.

WHY do we need to secure it when it is already a global standard across every development cycle?

Remember the quote?

A stitch in time saves nine!

S-SDLC is a preventive process for avoiding security mishaps once a product is out in the wild. The GOAL is to keep the application SAFE from breaches/attacks/hacks. Fixing a security vulnerability in a product that has already been released is always a costly affair in terms of brand, revenue & time!

Phase#1: SCOPING:

Gather as much information as possible in this phase regarding the feature in question. This helps us understand the business functionality much better, so that no security measure we put in place will conflict with the business requirements. Ultimately, it is our responsibility to deliver what the business wants. Do involve the key people in this phase, such as the product manager, senior management, or even the application developers.

Phase#2: DESIGN:

In the Design phase, it is our responsibility to provide a flow which is secure by design. Think of logic-based attacks which target the core logic of the feature. Ensure that sufficient checks are placed at multiple points, and that the source of truth for core logic never relies completely on the client side; there must always be server-side validation. For example, in the checkout flow of an e-commerce application, never take the client-side cart value as the source of truth for the payment gateway; treat it only as a validation step, and let the merchant's server hold the authoritative amount!

Phase#3: CODING:

Ensure that the best coding standards and practices are followed. Static analysis of the code should also be done at this phase, and not the next one, so that developers do not go through multiple redundant cycles. For example, ensure that no passwords are hardcoded in the code; instead, define a strategy to pick up sensitive keys at runtime, at deployment time, so they live only in runtime memory. That way, even if a breach happens on the application server, the credentials/keys are safe. Also ensure that proper sanitizing wrappers are in place, and that there is always a service layer between your frontend application and critical infrastructure like the database, one that sanitizes/parametrizes query inputs before executing them on the database server. A small sketch of the runtime-secrets idea follows.
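
A minimal sketch of picking up a secret at deployment/runtime instead of hardcoding it; the variable name and the application are hypothetical:

#!/bin/sh
# Fail fast if the secret was not injected by the deployment environment
# (e.g. by the orchestrator or a secret manager)
: "${DB_PASSWORD:?DB_PASSWORD must be set in the deployment environment}"
# my-app reads DB_PASSWORD from its environment; the value never appears
# in the source tree or in any config file checked into the repo
exec ./my-app

The same pattern works with secret managers: the deploy tooling injects the value at runtime, and the repository never contains it.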

Please note that developers' time is precious, and this phase shall ensure that it is not wasted in multiple redundant cycles!

Phase#4: TESTING:

The test cases for security testing should be pre-determined, so that once the RED team receives the build after QA sanity, it can start manual testing against whichever of the OWASP Top 10 vulnerabilities relate to the feature. Ensure that proper regression tests are also in place if it is an enhancement to an existing feature. Logical flow testing is one of the most important tests in this phase; ensure that the flow developed complies 100% with the flow proposed in Phase#2: Design. A proper end-to-end VAPT should be done, and the results recorded in your bug/task tracking tool, such as JIRA, for future regression testing reference.

Always prioritize security bugs according to the expected impact of the issue, and ensure that the impact description is well understood by the stakeholders! The bug bounty platforms have helped bounty hunters and security researchers practice this step by providing a proper flow on their platforms. Try raising a security bug in Bugcrowd!

If bugs are found, get them fixed in a timely manner, and DO ENSURE THAT YOU HAVE A SOLUTION for the bug beforehand; ultimately, you are the one who found the security bug, so propose a secure fix for it. For best practices and evasion techniques, refer to the OWASP Top 10 cheat sheets, and use them as the standard while addressing vulnerabilities and requesting a fix.

Phase#5: DEPLOY:

This is the last phase, in which we need to ensure that the environment the feature is being deployed into is secured. Ensure that the server has been hardened and that the software versions in use are free of known vulnerabilities! Also perform network assessments where and when required, and try to avoid non-standard ports. Ensure that a proper TLS implementation is in place. A security architect with the right DevOps skills can help you achieve the best results.

Once we implement the above process, we can ensure that the applications we deploy are well protected right from the design stage. But remember:

Security is a Process, not a Product!

Ensure that you have sufficient monitoring and alerts in place, and that a proper incident response team is well trained on the new feature, on how to identify security incidents and attack attempts, and on how to respond to them. Design a process for handling live attacks, and ensure that sufficient tools and visibility are provided to monitoring teams like the SOC.

Keep Defending !
