2.0 Configuration and Deployment Management Testing

  • 2.1 Test Network Infrastructure Configuration

    Known Server Vulnerabilities (CVEs, ...) - usually identified through automated scanning -

    Administrative Tools (default usernames and passwords, ...)

  • 2.2 Test Application Platform Configuration

    • Black-Box Testing

    • Gray-Box Testing

      • Configuration Review

        • Only enable server modules (ISAPI extensions in the case of IIS) that are needed for the application. This reduces the attack surface since the server is reduced in size and complexity as software modules are disabled. It also prevents vulnerabilities that might appear in the vendor software from affecting the site if they are only present in modules that have been already disabled.

        • Handle server errors (40x or 50x) with custom-made pages instead of with the default web server pages. Specifically make sure that any application errors will not be returned to the end user and that no code is leaked through these errors since it will help an attacker. It is actually very common to forget this point since developers do need this information in pre-production environments.

        • Make sure that the server software runs with minimized privileges in the operating system. This prevents an error in the server software from directly compromising the whole system, although an attacker could elevate privileges once running code as the web server.

        • Make sure the server software properly logs both legitimate access and errors.

        • Make sure that the server is configured to properly handle overloads and prevent Denial of Service attacks. Ensure that the server has been performance-tuned properly.

        • Never grant non-administrative identities (with the exception of NT SERVICE\WMSvc) access to applicationHost.config, redirection.config, and administration.config (either Read or Write access). This includes Network Service, IIS_IUSRS, IUSR, or any custom identity used by IIS application pools. IIS worker processes are not meant to access any of these files directly.

        • Never share out applicationHost.config, redirection.config, and administration.config on the network. When using Shared Configuration, prefer to export applicationHost.config to another location (see the section titled “Setting Permissions for Shared Configuration”).

        • Keep in mind that all users can read .NET Framework machine.config and root web.config files by default. Do not store sensitive information in these files if it is meant for administrator eyes only.

        • Encrypt sensitive information that should be read by the IIS worker processes only and not by other users on the machine.

        • Do not grant Write access to the identity that the Web server uses to access the shared applicationHost.config. This identity should have only Read access.

        • Use a separate identity to publish applicationHost.config to the share. Do not use this identity for configuring access to the shared configuration on the Web servers.

        • Use a strong password when exporting the encryption keys for use with shared configuration.

        • Maintain restricted access to the share containing the shared configuration and encryption keys. If this share is compromised, an attacker will be able to read and write any IIS configuration for your Web servers, redirect traffic from your Web site to malicious sources, and in some cases gain control of all web servers by loading arbitrary code into IIS worker processes.

        • Consider protecting this share with firewall rules and IPsec policies to allow only the member web servers to connect.

      • Logging

        In both cases (server and application logs) several issues should be tested and analyzed based on the log contents:

        1. Do the logs contain sensitive information?

        2. Are the logs stored on a dedicated server?

        3. Can log usage generate a Denial of Service condition?

        4. How are they rotated? Are logs kept for a sufficient time?

        5. How are logs reviewed? Can administrators use these reviews to detect targeted attacks?

        6. How are log backups preserved?

        7. Is the data being logged validated (min/max length, allowed characters, etc.) prior to being logged?

      • Sensitive Information in Logs

        This sensitive information can be misused by an attacker if they obtain the logs, for example through administrative interfaces or known web server vulnerabilities or misconfigurations (like the well-known server-status misconfiguration in Apache-based HTTP servers).

        A non-exhaustive list of sensitive information includes:

        • Debug information

        • Stack traces

        • Usernames

        • System component names

        • Internal IP addresses

        • Less sensitive personal data (e.g. email addresses, postal addresses and telephone numbers associated with named individuals)

        • Business data

        • Application source code

        • Session identification values

        • Access tokens

        • Sensitive personal data and some forms of personally identifiable information (PII)

        • Authentication passwords

        • Database connection strings

        • Encryption keys

        • Bank account or payment card holder data

        • Data of a higher security classification than the logging system is allowed to store

        • Commercially-sensitive information

        • Information it is illegal to collect in the relevant jurisdiction

        • Information a user has opted out of collection, or not consented to e.g. use of do not track, or where consent to collect has expired

      • Log Storage

        Logs can introduce a Denial of Service condition if they are not properly stored.

      • Log Rotation

        This feature should be tested in order to ensure that:

        • Logs are kept for the time defined in the security policy, not more and not less.

        • Logs are compressed once rotated (this is a convenience, since it will mean that more logs will be stored for the same available disk space).

        • File system permissions of rotated log files should be the same as (or stricter than) those of the log files themselves. For example, web servers need to write to the logs they use, but they don’t actually need to write to rotated logs, which means that the permissions of the files can be changed upon rotation to prevent the web server process from modifying them.

          Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker cannot force logs to rotate in order to hide his tracks.
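
        As an illustration, on Linux systems rotation is often handled by logrotate. A sketch of a policy reflecting the points above might look like the following; the path, retention period, and sizes are assumptions for illustration, not recommendations:

        ```
        /var/log/myapp/*.log {
            weekly
            rotate 12              # retention as defined by the security policy
            maxsize 100M           # also rotate on size; watch for attacker-forced rotation
            compress               # store more history in the same disk space
            delaycompress
            create 0640 root adm   # restrictive permissions on the new log file
        }
        ```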

      • Log Access Control

        Event log information should never be visible to end users. Even web administrators should not be able to see such logs since it breaks separation of duty controls. Ensure that any access control schema that is used to protect access to raw logs and any applications providing capabilities to view or search the logs is not linked with access control schemas for other application user roles. Neither should any log data be viewable by unauthenticated users.

      • Log Review

        In order to investigate web server attacks, the error log files of the server need to be analyzed. Review should concentrate on:

        • 40x (not found) error messages. A large number of these from the same source might indicate that a CGI scanner tool is being used against the web server.

        • 50x (server error) messages. These can be an indication of an attacker abusing parts of the application which fail unexpectedly. For example, the first phases of a SQL injection attack will produce these error messages when the SQL query is not properly constructed and its execution fails on the back-end database.
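
        The 40x review above is easy to script. This hedged sketch counts 404 responses per client IP in a combined-format access log; the sample log lines are fabricated for illustration:

        ```shell
        # Count 404 responses per client IP in a combined-format access log
        # (awk field 1 = client IP, field 9 = status code). Sample lines are fabricated.
        log=$(printf '%s\n' \
          '10.0.0.5 - - [18/Aug/2008:22:44:11 +0000] "GET /a HTTP/1.1" 404 209' \
          '10.0.0.5 - - [18/Aug/2008:22:44:12 +0000] "GET /b HTTP/1.1" 404 209' \
          '10.0.0.9 - - [18/Aug/2008:22:44:13 +0000] "GET / HTTP/1.1" 200 512')
        top=$(printf '%s\n' "$log" \
          | awk '$9 == 404 { n[$1]++ } END { for (ip in n) print n[ip], ip }' \
          | sort -rn)
        echo "$top"
        ```

        In a real review, replace the inline sample with the server's access log and look for sources with anomalously high counts.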

  • 2.3 Test File Extensions Handling for Sensitive Information

    • Forced Browsing

      • The following file extensions should never be returned by a web server, since they are related to files which may contain sensitive information or to files for which there is no reason to be served. (.asa, .inc, .config)

      • The following file extensions are related to files which, when accessed, are either displayed or downloaded by the browser. Therefore, files with these extensions must be checked to verify that they are indeed supposed to be served (and are not leftovers), and that they do not contain sensitive information.

        • .zip, .tar, .gz, .tgz, .rar, etc.: (Compressed) archive files

        • .java: No reason to provide access to Java source files

        • .txt: Text files

        • .pdf: PDF documents

        • .docx, .rtf, .xlsx, .pptx, etc.: Office documents

        • .bak, .old and other extensions indicative of backup files (for example: ~ for Emacs backup files)
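
      One way to operationalize these checks is to generate candidate URLs for a known resource and feed them to a probing tool. In this hedged sketch the host and path are hypothetical placeholders:

      ```shell
      # Build candidate URLs with potentially sensitive extensions for a known
      # resource (hypothetical host/path); probe them afterwards with e.g. curl.
      base="https://target.example/app/report"
      candidates=$(for ext in .zip .tar.gz .txt .pdf .bak .old; do
        printf '%s%s\n' "$base" "$ext"
      done)
      echo "$candidates"
      ```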

    • File Upload

      Windows 8.3 legacy file handling can sometimes be used to defeat file upload filters.

      Usage Examples:

      1. file.phtml gets processed as PHP code.

      2. FILE~1.PHT is served, but not processed by the PHP ISAPI handler.

      3. shell.phPWND can be uploaded.

      4. SHELL~1.PHP will be expanded and returned by the OS shell, then processed by the PHP ISAPI handler.

  • 2.4 Review Old Backup and Unreferenced Files for Sensitive Information

    login.asp → login.asp.old

    /.snapshot/

    • Black-Box Testing

      • Inference from the Naming Scheme Used for Published Content: for example, if a page viewuser.asp is found, then look also for edituser.asp, adduser.asp and deleteuser.asp. If a directory /app/user is found, then look also for /app/admin and /app/manager.

      • Other Clues in Published Content: HTML and JavaScript files (comments in HTML, directory paths referenced in JS)

      • /robots.txt

      • Blind Guessing (dirbusting)

      • Information Obtained Through Server Vulnerabilities and Misconfiguration

        Numerous vulnerabilities have been found in individual web servers which allow an attacker to enumerate unreferenced content, for example:

        • Apache ?M=D directory listing vulnerability.

        • Various IIS script source disclosure vulnerabilities.

        • IIS WebDAV directory listing vulnerabilities.

      • Use of Publicly Available Information: search for unlinked pages in archives and search engine caches (spidering the live site vs. spidering cached copies)

      • Filename Filter Bypass

        Because deny list filters are based on regular expressions, one can sometimes take advantage of obscure OS filename expansion features which work in ways the developer didn’t expect. The tester can sometimes exploit differences in the ways that filenames are parsed by the application, web server, and underlying OS and its filename conventions.

        Example: Windows 8.3 filename expansion: c:\program files becomes C:\PROGRA~1

        • Remove incompatible characters

        • Convert spaces to underscores

        • Take the first six characters of the basename

        • Add ~<digit> which is used to distinguish files with names using the same six initial characters

        • This convention changes after the first 3 name collisions

        • Truncate file extension to three characters

        • Make all the characters uppercase
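
        The steps above can be sketched as a small function. This is a simplified model (first collision only; incompatible-character removal omitted), not an exact reimplementation of the OS algorithm:

        ```shell
        # Simplified 8.3 short-name derivation: spaces to underscores, first six
        # characters of the basename, ~1, three-character extension, uppercase.
        shortname() {
          name=${1%.*}; ext=${1##*.}
          name=$(printf '%s' "$name" | tr ' ' '_' | tr 'a-z' 'A-Z' | cut -c1-6)
          ext=$(printf '%s' "$ext" | tr 'a-z' 'A-Z' | cut -c1-3)
          printf '%s~1.%s\n' "$name" "$ext"
        }
        shortname "file.phtml"    # FILE~1.PHT
        shortname "shell.phPWND"  # SHELL~1.PHP
        ```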

    • Gray-Box Testing

      Performing gray box testing against old and backup files requires examining the files contained in the directories belonging to the set of web directories served by the web server(s) of the web application infrastructure. Theoretically the examination should be performed by hand to be thorough. However, since in most cases copies of files or backup files tend to be created by using the same naming conventions, the search can be easily scripted. For example, editors leave behind backup copies by naming them with a recognizable extension or ending, and humans tend to leave behind files with .old or similar predictable extensions. A good strategy is that of periodically scheduling a background job checking for files with extensions likely to identify them as copy or backup files, and performing manual checks as well on a longer time basis.
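
      Such a scheduled check can be as simple as a find invocation over the web root. In this hedged sketch a temporary directory stands in for the real web root, so the example is self-contained:

      ```shell
      # Search a web root for files with common backup/copy extensions.
      webroot=$(mktemp -d)   # stand-in for the real web root
      touch "$webroot/login.asp" "$webroot/login.asp.old" "$webroot/index.php~"
      found=$(find "$webroot" -type f \( -name '*.old' -o -name '*.bak' -o -name '*~' \) | sort)
      echo "$found"
      rm -rf "$webroot"
      ```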

  • 2.5 Enumerate Infrastructure and Application Admin Interfaces

    • Black-Box Testing

      • Directory and file enumeration (/admin, /administrator, etc.) - Google dorks can help here -

      • Brute forcing of server contents

      • Comments and links in source code

      • Reviewing server and application documentation

      • Publicly available information. Many applications such as WordPress have default administrative interfaces.

      • Alternative server port. Administration interfaces may be seen on a different port on the host than the main application.

      • Parameter tampering in a request or a cookie, e.g. <input type="hidden" name="admin" value="no"> or Cookie: session_cookie; useradmin=0

      A combination of the above techniques may be used to attempt to bypass authentication.

      In such an instance the tester should be aware of the potential for administrative account lockout if such functionality is present.

    • Gray-Box Testing

      WebSphere:

      /admin /admin-authz.xml /admin.conf /admin.passwd /admin/* /admin/logon.jsp /admin/secure/logon.jsp

      PHP:

      /phpinfo /phpmyadmin/ /phpMyAdmin/ /mysqladmin/ /MySQLadmin /MySQLAdmin /login.php /logon.php /xmlrpc.php /dbadmin

      FrontPage:

      /admin.dll /admin.exe /administrators.pwd /author.dll /author.exe /author.log /authors.pwd /cgi-bin

      WebLogic:

      /AdminCaptureRootCA /AdminClients /AdminConnections /AdminEvents /AdminJDBC /AdminLicense /AdminMain /AdminProps /AdminRealm /AdminThreads

      WordPress:

      wp-admin/ wp-admin/about.php wp-admin/admin-ajax.php wp-admin/admin-db.php wp-admin/admin-footer.php wp-admin/admin-functions.php wp-admin/admin-header.php

  • 2.6 Test HTTP Methods

    • Discover the Supported Methods

      When testing an application that has to accept other methods, e.g. a RESTful Web Service, test it thoroughly to make sure that all endpoints accept only the methods that they require.

      tool:

      nmap -p 443 --script http-methods --script-args http-methods.url-path='/index.php' localhost

    • Testing the PUT Method

      1. Capture the base request of the target with a web proxy.

      2. Change the request method to PUT, add the test.html payload, and send the request to the application server.

        PUT /test.html HTTP/1.1
        Host: testing-website
        
        <html>
        HTTP PUT Method is Enabled
        </html>
      3. If the server responds with 2XX success codes or 3XX redirections, confirm with a GET request for the test.html file. If the file is returned, the application is vulnerable.

      If the HTTP PUT method is not allowed on the base URL or request, try other paths in the system.

      NOTE: If you are successful in uploading a web shell, you should overwrite it or ensure that the security team of the target is aware and removes the component promptly after your proof-of-concept.
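
      The decision in step 3 can be wrapped around curl’s status-code output. The curl invocation in the comment is illustrative, reusing the placeholder host from the example above:

      ```shell
      # Classify a PUT probe's HTTP status code. The code could be captured with:
      #   code=$(curl -s -o /dev/null -w '%{http_code}' -X PUT \
      #          --data '<html>HTTP PUT Method is Enabled</html>' \
      #          https://testing-website/test.html)
      put_probe_result() {
        case "$1" in
          2[0-9][0-9]|3[0-9][0-9]) echo "follow up: GET /test.html to confirm" ;;
          *)                       echo "PUT rejected ($1)" ;;
        esac
      }
      put_probe_result 201
      put_probe_result 405
      ```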

    • Testing for Access Control Bypass

      Find a page to visit that has a security constraint such that a GET request would normally force a 302 redirect to a log in page or force a log in directly. Issue requests using various methods such as HEAD, POST, PUT etc. as well as arbitrarily made up methods such as BILBAO, FOOBAR, CATS, etc. If the web application responds with a HTTP/1.1 200 OK that is not a log in page, it may be possible to bypass authentication or authorization. The following example uses Nmap’s ncat.

      $ ncat www.example.com 80
      HEAD /admin HTTP/1.1
      Host: www.example.com
      
      HTTP/1.1 200 OK
      Date: Mon, 18 Aug 2008 22:44:11 GMT
      Server: Apache
      Set-Cookie: PHPSESSID=pKi...; path=/; HttpOnly
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Set-Cookie: adminOnlyCookie1=...; expires=Tue, 18-Aug-2009 22:44:31 GMT; domain=www.example.com
      Set-Cookie: adminOnlyCookie2=...; expires=Mon, 18-Aug-2008 22:54:31 GMT; domain=www.example.com
      Set-Cookie: adminOnlyCookie3=...; expires=Sun, 19-Aug-2007 22:44:30 GMT; domain=www.example.com
      Content-Language: EN
      Connection: close
      Content-Type: text/html; charset=ISO-8859-1

      If the system appears vulnerable, issue CSRF-like attacks such as the following to exploit the issue more fully:

      • HEAD /admin/createUser.php?member=myAdmin

      • PUT /admin/changePw.php?member=myAdmin&passwd=foo123&confirm=foo123

      • CATS /admin/groupEdit.php?group=Admins&member=myAdmin&action=add

      Using the above three commands, modified to suit the application under test and testing requirements, a new user would be created, a password assigned, and the user made an administrator, all using blind request submission.

    • Testing for HTTP Method Overriding

      Some web frameworks provide a way to override the actual HTTP method in the request by emulating the missing HTTP verbs passing some custom header in the requests. The main purpose of this is to circumvent some middleware (e.g. proxy, firewall) limitation where methods allowed usually do not encompass verbs such as PUT or DELETE. The following alternative headers could be used to do such verb tunneling:

      • X-HTTP-Method

      • X-HTTP-Method-Override

      • X-Method-Override

      In order to test this, in the scenarios where restricted verbs such as PUT or DELETE return a “405 Method not allowed”, replay the same request with the addition of the alternative headers for HTTP method overriding, and observe how the system responds. The application should respond with a different status code (e.g. 200) in cases where method overriding is supported.

      The web server in the following example does not allow the DELETE method and blocks it:

      $ ncat www.example.com 80
      DELETE /resource.html HTTP/1.1
      Host: www.example.com
      
      HTTP/1.1 405 Method Not Allowed
      Date: Sat, 04 Apr 2020 18:26:53 GMT
      Server: Apache
      Allow: GET,HEAD,POST,OPTIONS
      Content-Length: 320
      Content-Type: text/html; charset=iso-8859-1
      Vary: Accept-Encoding

      After adding the X-HTTP-Method header, the server responds to the request with a 200:

      $ ncat www.example.com 80
      DELETE /resource.html HTTP/1.1
      Host: www.example.com
      X-HTTP-Method: DELETE
      
      HTTP/1.1 200 OK
      Date: Sat, 04 Apr 2020 19:26:01 GMT
      Server: Apache

  • 2.7 Test HTTP Strict Transport Security

    The use of this header by web applications must be checked to determine whether the following security issues could arise:

    • Attackers sniffing the network traffic and accessing the information transferred through an unencrypted channel.

    • Attackers exploiting a manipulator in the middle attack because of the problem of accepting certificates that are not trusted.

    • Users who mistakenly entered an address in the browser putting HTTP instead of HTTPS, or users who click on a link in a web application which mistakenly indicated use of the HTTP protocol.

    # How to Test
    $ curl -s -D- https://owasp.org | grep -i strict
    # Example output: Strict-Transport-Security: max-age=31536000
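
    The retrieved header value can then be classified. This hedged sketch covers the common cases (missing header, max-age=0, and a present max-age directive); the sample inputs are hard-coded rather than fetched over the network:

    ```shell
    # Classify a Strict-Transport-Security header value captured e.g. via the
    # curl command above.
    check_hsts() {
      case "$1" in
        "")          echo "missing" ;;
        *max-age=0*) echo "disabled (max-age=0)" ;;
        *max-age=*)  echo "present" ;;
        *)           echo "malformed (no max-age)" ;;
      esac
    }
    check_hsts "Strict-Transport-Security: max-age=31536000; includeSubDomains"
    check_hsts ""
    ```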

  • 2.8 Test RIA Cross Domain Policy

    To test for RIA policy file weakness the tester should try to retrieve the policy files crossdomain.xml and clientaccesspolicy.xml from the application’s root, and from every folder found.

    For example, if the application’s URL is http://www.owasp.org, the tester should try to download the files http://www.owasp.org/crossdomain.xml and http://www.owasp.org/clientaccesspolicy.xml.

    After retrieving all the policy files, the permissions allowed should be checked under the least privilege principle. Requests should only come from the domains, ports, or protocols that are necessary. Overly permissive policies should be avoided. Policies with * in them should be closely examined.

    Example :

    <cross-domain-policy>
        <allow-access-from domain="*" />
    </cross-domain-policy>

    Result Expected :

    • A list of policy files found.

    • A list of weak settings in the policies.
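
    Checking a retrieved policy for the wildcard case shown above can be scripted. This sketch greps a saved policy string rather than fetching it over the network:

    ```shell
    # Flag a wildcard allow-access-from entry in a retrieved policy file.
    policy='<cross-domain-policy> <allow-access-from domain="*" /> </cross-domain-policy>'
    if printf '%s' "$policy" | grep -q 'domain="\*"'; then
      verdict="WEAK: wildcard domain"
    else
      verdict="no wildcard found"
    fi
    echo "$verdict"
    ```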

  • 2.9 Test File Permission

    A clear example is an executable file that can be run by unauthorized users. As another example, account information or a token value used to access an API - increasingly seen in modern web services or microservices - may be stored in a configuration file whose permissions are set world-readable by default at installation. Such sensitive data can be exposed by malicious internal actors on the host or by a remote attacker who compromised the service with other vulnerabilities but obtained only normal user privileges.

    In Linux, use the ls command to check file permissions. Alternatively, namei can be used to recursively list file permissions.

    $ namei -l /PathToCheck/

    The files and directories that require file permission testing include but are not limited to:

    • Web files/directory

    • Configuration files/directory

    • Sensitive files (encrypted data, password, key)/directory

    • Log files (security logs, operation logs, admin logs)/directory

    • Executables (scripts, EXE, JAR, class, PHP, ASP)/directory

    • Database files/directory

    • Temp files /directory

    • Upload files/directory
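
    Beyond inspecting individual files with ls or namei, world-readable files can be swept for with find. In this hedged sketch a temporary directory stands in for a real configuration directory:

    ```shell
    # Find world-readable files under a configuration directory.
    cfg=$(mktemp -d)             # stand-in for e.g. /etc/myapp
    touch "$cfg/app.conf" "$cfg/secret.key"
    chmod 644 "$cfg/app.conf"    # world-readable: should be flagged
    chmod 600 "$cfg/secret.key"  # owner-only: should not be flagged
    exposed=$(find "$cfg" -type f -perm -o+r)
    echo "$exposed"
    rm -rf "$cfg"
    ```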

  • 2.10 Test for Subdomain Takeover

    • Black-Box Testing

      The first step is to enumerate the victim DNS servers and resource records. There are multiple ways to accomplish this task, for example DNS enumeration using a list of common subdomains dictionary, DNS brute force or using web search engines and other OSINT data sources.

      Using the dig command the tester looks for the following DNS server response messages that warrant further investigation:

      • NXDOMAIN

      • SERVFAIL

      • REFUSED

      • no servers could be reached.
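
      Filtering dig output for these statuses can be scripted. The extraction command in the comment is illustrative and the domain is hypothetical; note that “no servers could be reached” shows up as a dig failure rather than a status value:

      ```shell
      # Decide whether a DNS response status warrants follow-up. The status
      # could be extracted with e.g.:
      #   status=$(dig +noall +comments CNAME sub.victim.com \
      #            | sed -n 's/.*status: \([A-Z]*\).*/\1/p')
      worth_investigating() {
        case "$1" in
          NXDOMAIN|SERVFAIL|REFUSED) echo "yes" ;;
          *)                         echo "no" ;;
        esac
      }
      worth_investigating NXDOMAIN
      worth_investigating NOERROR
      ```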

      • Testing DNS A, CNAME Record Subdomain Takeover

        Perform a basic DNS enumeration on the victim’s domain (victim.com) using dnsrecon:

        $ ./dnsrecon.py -d victim.com
        [*] Performing General Enumeration of Domain: victim.com
        ...
        [-] DNSSEC is not configured for victim.com
        [*] A subdomain.victim.com 192.30.252.153
        [*] CNAME subdomain1.victim.com fictioussubdomain.victim.com
        ...

        Identify which DNS resource records are dead and point to inactive/not-used services. Using the dig command for the CNAME record:

        $ dig CNAME fictioussubdomain.victim.com
        ; <<>> DiG 9.10.3-P4-Ubuntu <<>> ns victim.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 42950
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

        The following DNS responses warrant further investigation: NXDOMAIN.

        To test the A record the tester performs a whois database lookup and identifies GitHub as the service provider:

        $ whois 192.30.252.153 | grep "OrgName"
        OrgName: GitHub, Inc.

        The tester visits subdomain.victim.com or issues an HTTP GET request, which returns a “404 - File not found” response. This is a clear indication of the vulnerability.

      • Testing NS Record Subdomain Takeover

        Identify all nameservers for the domain in scope:

        $ dig ns victim.com +short
        ns1.victim.com
        nameserver.expireddomain.com

        In this fictitious example the tester checks whether the domain expireddomain.com is still active with a domain registrar search. If the domain is available for purchase, the subdomain is vulnerable.

        The following DNS responses warrant further investigation: SERVFAIL or REFUSED.

    • Gray-Box Testing

      The tester has the DNS zone file available, which means DNS enumeration is not necessary. The testing methodology is the same.

  • 2.11 Test Cloud Storage

    First identify the URL to access the data in the storage service, and then consider the following tests:

    • read the unauthorized data

    • upload a new arbitrary file

    You may use curl for the tests with the following commands and see if unauthorized actions can be performed successfully.

    To test the ability to read an object:

    curl -X GET https://<cloud-storage-service>/<object>

    To test the ability to upload a file:

    curl -X PUT -d 'test' 'https://<cloud-storage-service>/test.txt'

    • Testing for Amazon S3 Bucket Misconfiguration

      The Amazon S3 bucket URLs follow one of two formats, either virtual host style or path-style.

      • Virtual Hosted Style Access

        https://bucket-name.s3.Region.amazonaws.com/key-name

        In the following example, my-bucket is the bucket name, us-west-2 is the region, and puppy.png is the key-name:

        https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png

      • Path-Style Access

        https://s3.Region.amazonaws.com/bucket-name/key-name

        As above, in the following example, my-bucket is the bucket name, us-west-2 is the region, and puppy.jpg is the key-name:

        https://s3.us-west-2.amazonaws.com/my-bucket/puppy.jpg

        For some regions, the legacy global endpoint that does not specify a region-specific endpoint can be used. Its format is also either virtual hosted style or path-style.

      • Virtual Hosted Style Access

        https://bucket-name.s3.amazonaws.com

      • Path-Style Access

        https://s3.amazonaws.com/bucket-name

    • Identify Bucket URL

      For black-box testing, S3 URLs can be found in HTTP messages. The following example shows a bucket URL sent in an img tag in an HTTP response.

      ... <img src="https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png"> ...

      For gray-box testing, you can obtain bucket URLs from Amazon’s web interface, documents, source code, or any other available sources.
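
      Extracting candidate bucket URLs from saved responses can be automated with a rough grep pattern intended to cover both URL styles. This sketch runs against an inline sample body standing in for a saved HTTP response:

      ```shell
      # Pull candidate S3 URLs out of a response body (rough pattern; verify
      # matches manually). An inline sample stands in for a saved response.
      body='<img src="https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png">'
      urls=$(printf '%s\n' "$body" \
        | grep -oE 'https://[a-zA-Z0-9.-]*s3[a-z0-9.-]*\.amazonaws\.com/[^"]*')
      echo "$urls"
      ```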

    • Testing with AWS-CLI

      In addition to testing with curl, you can also test with the AWS Command Line Interface (AWS CLI). In this case the s3:// URI scheme is used.

      • List

        The following command lists all the objects of the bucket when it is configured as public.

        aws s3 ls s3://<bucket-name>

      • Upload

        The following is the command to upload a file:

        aws s3 cp arbitrary-file s3://bucket-name/path-to-save

        This example shows the result when the upload has been successful.

        $ aws s3 cp test.txt s3://bucket-name/test.txt
        upload: ./test.txt to s3://bucket-name/test.txt

        This example shows the result when the upload has failed.

        $ aws s3 cp test.txt s3://bucket-name/test.txt
        upload failed: ./test2.txt to s3://bucket-name/test2.txt
        An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

      • Remove

        The following is the command to remove an object:

        aws s3 rm s3://bucket-name/object-to-remove
