Bibliographic Information

Preventing Web Attacks with Apache

By: Ryan C. Barnett

Publisher: Addison-Wesley Professional

Pub. Date: January 27, 2006

Print ISBN-10: 0-321-32128-6

Print ISBN-13: 978-0-321-32128-2

B. Apache Module Listing

Module Name

Description

Security Risk

Recommend

Mod_mmap_static

Maps identified web pages directly into memory for fast access speeds.

Minimal.

Disable.

Mod_vhost_alias

Creates dynamically configured virtual hosts, by allowing the IP address and/or the Host: header of the HTTP request to be used as part of the pathname to determine what files to serve.

Minimal.

Disable.

Mod_bandwidth

Enables server-wide or per-connection bandwidth limits, based on the directory, size of files, and remote IP/domain.

Minimal. Will not significantly help mitigate denial-of-service attacks.

Disable.

Mod_throttle

Intended to reduce the load on your server and the bandwidth generated by popular virtual hosts, directories, locations, or users, according to supported policies that decide when to delay or refuse requests. Mod_throttle can also track and throttle incoming connections by IP address or by authenticated remote user.

Minimal. Will not significantly help mitigate denial-of-service attacks.

Disable.

Mod_env

This module allows for control of the environment that will be provided to CGI scripts and SSI pages. Environment variables may be passed from the shell that invoked the httpd process. Alternatively, environment variables may be set or unset within the configuration process.

Enabling CGI and SSI within the httpd server may imply a significant security impact; however, the addition of mod_env is unlikely to increase the security risk significantly.

Enable if you are using the CGI scripts for ErrorDocuments; otherwise, disable.

Mod_log_config

Provides for logging of the requests made to the server, using the Common Log Format or a user-specified format.

Server logging provides useful statistical and security functionality on the web server. See the section on auditing below for a discussion on log management.

Enable; configure to use common log format.

Mod_log_agent

Provides logging of the client user agents.

Server logging provides useful statistical and security functionality on the web server. See the section on auditing for a discussion on log management.

Disable; use log_config instead.

Mod_log_referer

Provides logging of the referer page.

Server logging provides useful statistical and security functionality on the web server. See the section on auditing for a discussion on log management.

Disable; use log_config instead.

Mod_mime_magic

Determines the MIME type of a file by looking at a few bytes of its contents. This provides functionality over and above mod_mime.

Minimal. This does not significantly affect server security, but allows the mime-type of files to be correctly sent to the web browser client.

Disable by default, but enable subject to web server requirements if mod_mime is insufficient.

Mod_mime

Determines the MIME type of a file by looking at the file extension.

Minimal. This does not significantly affect server security, but allows the mime-type of files to be correctly sent to the web browser client.

Enable. This module is normally an essential prerequisite for normal operation.

Mod_negotiation

Provides a content negotiation capability for web data. Content negotiation is the selection of the document (or image) that best matches the client’s capabilities, from one of several available documents. An example would be where three different languages are supported by three (otherwise identical) web pages. Web browsers that specify a preference for Spanish, for example, may be sent the Spanish language version, while English language speakers will receive the English version.

Minimal.

Disable unless you have an identified requirement for content negotiation.

Mod_status

This module provides information on server activity and performance through the /server-status meta-page.

The server-status page can provide potential attackers with useful information about your web server configuration, from which targeted attack profiles can be derived.

Disable. Use ACLs if you must implement it. Note that although this module is normally active, most Apache configurations disable the /server-status link elsewhere in the configuration file.

Mod_info

This module provides information on server activity and performance through the /server-info meta-page.

The server-info page can provide potential attackers with useful information about your web server configuration, from which targeted attack profiles can be derived.

Disable. Use ACLs if you must implement. Note that although this module is normally active, most Apache configurations disable the /server-info link elsewhere in the configuration file.

Mod_include

This module facilitates Server Side Includes (SSI). SSI are directives that are placed in HTML pages and evaluated on the server while the pages are being served. They let you add dynamically generated content to an existing HTML page, without having to serve the entire page via a CGI program, or other dynamic technology.

SSI facilitates the provision of dynamic content, which can potentially be the result of a server-side executable (shell / perl scripts, or other executables). Allowing the execution of applications from the web server increases the risk profile of the web server. Passing user input to external applications may further increase that risk.

Disable unless the site administration benefits clearly outweigh the potential risk of enabling SSI. It is recommended that code evaluation/ checking procedures be implemented for any applications that are called by an SSI-enabled page.

Mod_autoindex

Provides automatic index generation for directories within the webroot that do not have a default html page (for example, index.html).

Automatic index generation allows external users to see the entire contents of the directory. There are situations where this is appropriate, such as file archives. If there is an intention to rely on “security through obscurity” to protect web resources, then this feature should be disabled.

Disable.

Mod_dir

This module redirects users to either an appropriate “index.html” file, or an automatically generated index (via autoindex) when a user requests a URL with a trailing slash character.

This form of redirection is an accepted part of the normal operation of a web server. The security implications are minimal.

Disable.

Mod_cgi

This module facilitates the execution of external applications, generally in order to provide dynamic content to a web page.

CGI facilitates the provision of dynamic content, which can potentially be the result of a server-side executable (shell / perl scripts, or other executables). Allowing the execution of applications from the web server increases the risk profile of the web server. Passing user input to external applications may further increase that risk. Significant web server vulnerabilities have resulted from bugs in CGI code in the past.

Disable unless you are using CGI scripts for ErrorDocuments. It is recommended that code evaluation/checking procedures be implemented for any applications that are called by a CGI-enabled page.

Mod_asis

This module facilitates the provision of a particular file via HTTP, without prepending HTTP headers that are a normal part of the file delivery. Files can therefore include their own custom HTTP headers.

Minimal.

Disable, unless there is a requirement for custom headers.

Mod_imap

This module facilitates server-side image-map processing.

Minimal.

Disable unless required.

Mod_actions

This module provides for executing CGI scripts based on media type or request method—for example, a CGI script can be run whenever a file of a certain type is requested.

CGI facilitates the provision of dynamic content, which can potentially be the result of a server-side executable (shell / perl scripts, or other executables). Allowing the execution of applications from the web server increases the risk profile of the web server. Passing user input to external applications may further increase that risk. Significant web server vulnerabilities have resulted from bugs in CGI code in the past.

Disable unless you have a specific requirement, and the benefits clearly outweigh the potential risk of enabling CGI. It is recommended that code evaluation/checking procedures be implemented for any applications that are called by a CGI-enabled page.

Mod_spelling

This module attempts to correct misspellings of URLs that users might have entered, by ignoring capitalization and by allowing up to one misspelling.

Minimal.

Disable.

Mod_userdir

This module allows Apache to include, within the web directory hierarchy, a specific directory located inside the home directories of local system users.

Users can create a directory (such as public_html) within their home directories. With the addition of mod_userdir, apache will look within this directory when a request in the format of http://localhost/~username is received. Files within user directories are generally outside the control of the normal site webmaster, and if CGI/SSI is used, can also be outside the control of the site security administrator.

Disable, unless there is a clear benefit to be gained, and only as a result of a risk assessment.

Mod_alias

This module allows an administrator to maintain multiple document stores, under different directory hierarchies, and map them into the web document tree. For example, although the default document root may be /www, the /data/applications/executables directory could be mapped to the /apps directory in the web tree. As such, a request for http://localhost/index.html would go to /www/index.html on the file system, whereas a request for http://localhost/apps/index.html would go to /data/applications/executables/index.html on the file system.

Minimal.

Enable.

Mod_rewrite

Mod_rewrite is a complex module that provides a rule-based URL-rewriting facility. Mod_rewrite is particularly useful when a site upgrade leads to changes in URL locations, but the site wishes to allow users to retain their normal bookmarks, and still be able to get to the new information.

Mod_rewrite has no significant security implications.

Enable. It allows for filtering of identified malicious requests.

Mod_access

Provides access control based on client hostname, IP address, or other characteristics of the client request.

Mod_access provides access control based only on information provided by the connection layer, or the client browser. It is recommended that mod_access be used for access control only where the organization has control over the data provided. For example, access control by IP address is likely to be inappropriate for Internet connections, where the security administrator has no control over the IP address. Access control by IP address may be more appropriate for internal networks where address allocation and network monitoring facilitate a reduced risk profile.

Enable for use with ACLs (IP, network names, and hostnames).

Mod_auth

This module allows the use of HTTP Basic Authentication to restrict access by looking up users in plain-text password and group files. Similar functionality and greater scalability is provided by mod_auth_dbm and mod_auth_db. HTTP Digest Authentication is provided by mod_auth_digest.

Mod_auth provides a very basic authentication and access control facility that is usually difficult to administer for large volumes of users. Basic UNIX ‘crypt’ format passwords are used, which could potentially be exported from /etc/passwd and /etc/shadow on UNIX systems to alleviate administration somewhat.

Enable for user ACLs. If authentication is required, consider alternative authentication mechanisms, including certificate-based authentication or LDAP authentication. If mod_auth is used, consider using a specific, designated authentication file outside the normal web document tree, rather than the alternative .htaccess files within the document directory.

Mod_auth_anon

This module allows "anonymous" access to authenticated areas, in a manner similar to anonymous FTP: clients log in with a designated anonymous username and are typically asked to supply their email address as the password, which can be logged.

Mod_auth_anon does not provide real authentication; any client that supplies the anonymous username (and an email-style password, if one is required) is granted access, so it should not be used to protect sensitive content.

Disable. If authentication is required, consider alternative authentication mechanisms, including certificate-based authentication, LDAP authentication, or ssh authentication using mod_auth_any. If mod_auth is used, consider using a specific, designated authentication file outside the normal web document tree, rather than the alternative .htaccess files within the document directory.

Mod_auth_db

This module allows the use of Berkeley database files for authentication purposes.

Mod_auth_db provides a very basic authentication and access control facility that is usually difficult to administer for large volumes of users. Basic UNIX ‘crypt’ format passwords are used within the DB file, which could potentially be exported from /etc/passwd and /etc/shadow on UNIX systems to alleviate administration somewhat.

Disable. If authentication is required, consider alternative authentication mechanisms, including certificate-based authentication, LDAP authentication, or ssh authentication using mod_auth_any. If mod_auth_db is used, consider using a specific, designated authentication file outside the normal web document tree, rather than the alternative .htaccess files within the document directory.

Mod_auth_any

This module allows the use of an arbitrary command-line tool to authenticate a user.

Mod_auth_any is a powerful authentication facility that enables apache to utilize external user databases (such as LDAP directories, or potentially even Windows 2000 active directory) to authenticate users against provided authentication details.

Disable by default. If authentication details need to be synchronized with an external database, consider using this functionality. Note that the supplied username and password are passed as command-line arguments to the indicated authentication application. As such, users on the local system may potentially pick up the authentication information using the ‘ps’ command. Applications that verify the authentication information should also be evaluated in the context of buffer-overflow vulnerabilities, as the supplied userid/password may potentially contain overflow code. If mod_auth_any is used, consider using a specific, designated authentication file outside the normal web document tree, rather than the alternative .htaccess files within the document directory.

Mod_auth_dbm

This module allows the use of Berkeley DBM files for authentication purposes.

Mod_auth_dbm provides a very basic authentication and access control facility that is usually difficult to administer for large volumes of users. Basic UNIX ‘crypt’ format passwords are used within the DBM file, which could potentially be exported from /etc/passwd and /etc/shadow on UNIX systems to alleviate administration somewhat.

Disable. If authentication is required, consider alternative authentication mechanisms, including certificate-based authentication or ssh authentication using mod_auth_any. If mod_auth_dbm is used, consider using a specific, designated authentication file outside the normal web document tree, rather than the alternative .htaccess files within the document directory.

Mod_auth_ldap

This module allows the use of an external LDAP database for authentication purposes.

Mod_auth_ldap provides authentication and authorization against external LDAP databases.

Disable by default. Consider this authentication mechanism if the organization is interested in using an LDAP directory for authentication purposes.

Mod_auth_mysql

This module allows the use of an external MySQL database for authentication purposes.

Mod_auth_mysql provides an authentication and access control facility. Basic UNIX ‘crypt’ format passwords are used within the database, which could potentially be exported from /etc/passwd and /etc/shadow on UNIX systems to alleviate administration somewhat.

Disable. If authentication is required, consider alternative authentication mechanisms, including certificate-based authentication, LDAP, or ssh authentication using mod_auth_any.

Mod_auth_pgsql

This module allows the use of an external PostgreSQL database for authentication purposes.

Mod_auth_pgsql provides an authentication and access control facility. Basic UNIX ‘crypt’ format passwords are used within the database, which could potentially be exported from /etc/passwd and /etc/shadow on UNIX systems to alleviate administration somewhat.

Disable. If authentication is required, consider alternative authentication mechanisms, including certificate-based authentication, LDAP, or ssh authentication using mod_auth_any.

Mod_auth_digest

This module is similar to mod_auth, but allows the use of MD5 digest-encrypted passwords, rather than basic UNIX CRYPT passwords.

Mod_auth_digest provides an authentication and access control facility using MD5-encrypted passwords, as enabled on many recent Linux distributions.

Disable. If authentication is required, consider alternative authentication mechanisms, including certificate-based authentication, LDAP, or ssh authentication using mod_auth_any.

Mod_proxy

This module turns the apache web server into a web proxy server.

Care should be taken with the configuration of proxy servers. If the intent is to give internal users access to external web sites, there is a risk that the reverse could also be enabled, allowing Internet users to browse internal web servers.

If this server is a normal web server, then this module is not required for normal operation and should be disabled. If this server is being used as a proxy or a reverse proxy, then this module must be enabled.

Mod_cern_meta

This module facilitates the inclusion of custom CERN header data when a web page is served to a client.

Minimal.

Disable. This module is not required for the normal operation of a web server.

Mod_expires

Facilitates the inclusion of custom expiry headers within web pages.

Minimal.

Disable. This module is not required for the normal operation of a web server.

Mod_headers

Facilitates the inclusion/modification/removal of headers within web pages.

Minimal.

Enable. We will use this module to insert bogus headers to help obfuscate both our web server software version and our web architecture.

Mod_usertrack

Allows the web site administrator to track the actions of individual users on a web site using cookies.

It should be noted that it is a client/user choice whether to accept cookies from the site or not. As such, the data derived from this module should not be considered accurate or comprehensive.

Enable if you want to insert bogus cookies to emulate a different web server (e.g., ASPSESSIONIDGGQGQQXC for Microsoft-IIS).

Mod_example

This is an example module only, and should not be enabled on production servers.

Minimal.

Disable. This module is not required for the normal operation of a web server.

Mod_unique_id

This module generates a unique identifier for each request that is (almost) guaranteed to be unique across a cluster of HTTP servers.

Minimal.

Disable. For normal web server activity, even in a clustered environment, unique ids are not required.

Mod_setenvif

This module allows for control of the environment that will be provided to CGI scripts and SSI pages, based on attributes associated with the client HTTP request. Environment variables may be passed from the shell that invoked the httpd process. Alternatively, environment variables may be set or unset within the configuration process. Environment variables can be set, for example, only if the User-Agent string provided by the client matches “netscape.”

Enabling CGI and SSI within the httpd server may imply a significant security impact; however, the addition of mod_setenvif is unlikely to increase the security risk significantly.

The normal recommendation would be to disable this feature unless you have CGI/SSI enabled, and you have an identified requirement to pass specific, static, environment variables to your scripts based on items such as browser type/version. However, as the feature is used within most configuration files to force an HTTP 1.0 response (as opposed to HTTP 1.1) for older browser technology, the default for most web servers would be to enable this feature.

Libperl

This module allows a web author to embed a subset of the PERL language within a web page, to be acted upon by the web server prior to delivering HTML to the client.

Enabling any active scripting feature within the httpd server can increase the risk to the web server if external user input is acted upon by the script in question.

Disable this functionality unless you have a specific requirement for active scripting using the PERL language. Note that although executing the PERL script using CGI capabilities is an option, the PERL interpreter is executed each time the CGI script is loaded. Using embedded PERL via the PERL module only loads the interpreter once, therefore increasing average processing speed.

Mod_php Libphp3 Libphp4

This module allows a web author to embed PHP (personal home page) language components within a web page, to be acted upon by the web server prior to delivering HTML to the client.

Enabling any active scripting feature within the httpd server can increase the risk to the web server if external user input is acted upon by the script in question.

Disable this functionality unless you have a specific requirement for active scripting using the PHP language.

Libdav

This module implements DAV server capabilities within Apache. DAV is a collaborative web development environment that allows multiple authors to update web data in a controlled fashion.

DAV allows modification of web pages by remote users, and integrates into the default apache authentication and access control facilities. If DAV is enabled on a web server that also serves pages to the general public, consider either: 1) Using a reverse proxy server in front of the http server that blocks facilities such as “PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK” from non-internal sources, or 2) Using web-dav on an ‘acceptance’ server only, with changed data mirrored to the production (available to the internet) web server.

Disable this functionality unless you have a specific requirement for multiple users to update files. If DAV is required, analyze the risk to the infrastructure in the context of a risk assessment.

Mod_roaming

Mod_roaming allows the use of an apache server as a Netscape Roaming Access server. This facilitates the storage of Netscape Communicator 4.5 preferences, bookmarks, address books, cookies, etc. on the server. Netscape Communicator web clients can be used to access and update the settings.

An HTTP server that implements Mod_roaming should generally be a special-purpose web server, only used for the storage/management of roaming profiles. Both read and write protocols are implemented to facilitate roaming profile capabilities.

Disable this functionality unless you exclusively utilize netscape clients with roaming-profile capabilities. It is recommended that this be used only for intranet clients unless an appropriate risk assessment has been conducted.

Libssl

The Apache SSL module facilitates the use of X.509 certificates to provide Secure-Sockets-Layer encryption (and potentially, authentication) capabilities to Apache.

Web pages served via HTTPS will increase the processing requirements of your system, but provide a level of confidentiality between client web browser and the web server.

Disable this functionality unless you require message confidentiality or authentication within an encrypted channel. Note that software or hardware x.509 authentication tokens can be supported with this module, assuming appropriate client-side infrastructure is in place.

Mod_put

This module supports uploads of web pages via the HTTP PUT method.

Write access to your server web pages should be carefully considered in the context of an appropriate risk assessment. If mod_put is enabled on a web server that also serves pages to the general public, consider either: 1) Using a reverse proxy server in front of the http server that blocks facilities such as “PUT” from non-internal sources, or 2) Using mod_put on an ‘acceptance’ server only, with changed data mirrored to the production (available to the internet) web server.

Disable this functionality unless you have a specific requirement for non-local users to update files.

Mod_python

This module allows a web author to embed a subset of the Python language within a web page, to be acted upon by the web server prior to delivering HTML to the client.

Enabling any active scripting feature within the httpd server can increase the risk to the web server if external user input is acted upon by the script in question.

Disable this functionality unless you have a specific requirement for active scripting using the Python language. Note that although executing the Python script using CGI capabilities is an option, the Python interpreter is executed each time the CGI script is loaded. Using embedded Python via the Python module only loads the interpreter once, therefore increasing average processing speed.

C. Example httpd.conf File

##
## This file has been simplified (removing normal httpd.conf
## information) in order to make it easier for the reader to identify
## the security settings.
##
## You should modify this file appropriately for your environment.
##
##########################################
### Server-Oriented General Directives ###
##########################################
ServerType standalone
ServerRoot "/var/www"
DocumentRoot "/var/www/htdocs"
ServerName www.companyx.com
HostnameLookups On
Port 80
##########################################
########################################
### User-Oriented General Directives ###
########################################
User webserv
Group webserv
ServerAdmin webmaster@companyx.com
########################################
PidFile /var/www/logs/httpd.pid
ScoreBoardFile /var/www/logs/httpd.scoreboard
#########################################
### DoS Protective General Directives ###
#########################################
Timeout 60
KeepAlive On
KeepAliveTimeout 15
MaxKeepAliveRequests 100
MinSpareServers 10
MaxSpareServers 20
StartServers 10
MaxClients 2048
MaxRequestsPerChild 0
DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 1
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 10
DOSEmailNotify root
#########################################
##########################################
### Buffer Overflow General Directives ###
##########################################
LimitRequestBody 10240
LimitRequestFields 40
LimitRequestFieldSize 1000
LimitRequestLine 500
CoreDumpDirectory /var/www/logs
##########################################
###############################################
### Software Obfuscation General Directives ###
###############################################
ServerTokens Prod
ServerSignature Off
ErrorDocument 404 /custom404.html
ErrorDocument 400 /cgi-bin/400.cgi
ErrorDocument 401 /cgi-bin/401.cgi
ErrorDocument 403 /cgi-bin/403.cgi
ErrorDocument 405 /cgi-bin/405.cgi
ErrorDocument 406 /cgi-bin/406.cgi
ErrorDocument 409 /cgi-bin/409.cgi
ErrorDocument 413 /cgi-bin/413.cgi
ErrorDocument 414 /cgi-bin/414.cgi
ErrorDocument 500 /cgi-bin/500.cgi
ErrorDocument 501 /cgi-bin/501.cgi
###############################################
##########################
### Mod_Rewrite VooDoo ###
##########################
RewriteEngine On
RewriteLog /var/www/logs/rewrite.log
RewriteLogLevel 2
RewriteRule [^a-zA-Z0-9|\.|/|_|-] - [F]
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
#########################
############################
### IDS/Honeypot Options ###
############################
#
# This next section will deny attempts to access common CGI directories.
#
deny from all
#
# This next section will deny attempts to access common CGI files.
# This is an alternative to actually creating fake cgi scripts.
#
deny from all
deny from all
############################
####################
### Fake Headers ###
####################
Header set Via "1.1 squid.proxy.companyx.com (Squid/2.4.STABLE6)"
Header set X-Powered-By "ASP.NET"
####################
###################################
### Mod_Security IDS Directives ###
###################################
# Turn the filtering engine On or Off
SecFilterEngine On
# Make sure that URL encoding is valid
SecFilterCheckURLEncoding On
# Make sure the Unicode encoding is valid
SecFilterCheckUnicodeEncoding On
# Only allow bytes from this range
SecFilterForceByteRange 32 126
# The audit engine works independently and
# can be turned On or Off on a per-server or
# per-directory basis
SecAuditEngine On
# The name of the audit log file
SecAuditLog logs/audit_log
SecFilterDebugLog logs/modsec_debug_log
SecFilterDebugLevel 0
# Should mod_security inspect POST payloads
SecFilterScanPOST On
# Action to take by default
SecFilterDefaultAction "deny,log,status:403"
# Prevent OS-specific keywords
SecFilter /etc/passwd
# Prevent path traversal (..) attacks
SecFilter "\.\./"
# Weaker XSS protection but allows common HTML tags
SecFilter "<( |\n)*script"
# Prevent XSS attacks (HTML/Javascript injection)
SecFilter "<(.|\n)+>"
# Very crude filters to prevent SQL injection attacks
SecFilter "delete[[:space:]]+from"
SecFilter "insert[[:space:]]+into"
SecFilter "select.+from"
# Require HTTP_USER_AGENT and HTTP_HOST headers
SecFilterSelective "HTTP_USER_AGENT|HTTP_HOST" "^$"
# Restrict cgi-bin access to allow ONLY the following files:
# - 4XX.cgi and 5XX.cgi Error Scripts
# - List any valid cgi scripts
# Any request for files other than those listed will be denied
SecFilter "!(4..\.cgi|5..\.cgi|valid1\.cgi|valid2\.pl)"
include conf/snortmodsec-rules.txt
##########################
Listen 80
Listen 443
Options None
AllowOverride None
Order deny,allow
Deny from all
<Directory "/var/www/htdocs">
deny from all
Options -FollowSymLinks -Includes -Indexes -MultiViews
AllowOverride None
Order allow,deny
Allow from all
AuthType Basic
AuthName "Private Access Test"
AuthUserFile /var/www/conf/passwd
Require user test
UserDir public_html
DirectoryIndex index.html
AccessFileName .htaccess
Order allow,deny
Deny from all
Satisfy All
UseCanonicalName On
TypesConfig /var/www/conf/mime.types
DefaultType text/plain
MIMEMagicFile /var/www/conf/magic
##################################
### Logging General Directives ###
##################################
ErrorLog syslog
LogLevel debug
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Accept}i\" \"%{Accept-Encoding}i\"
\"%{Host}i\" \"%{Connection}i\" \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
CustomLog /var/www/logs/access_log common
##################################
Alias /icons/ "/var/www/icons/"
<Directory "/var/www/icons">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
Alias /manual/ "/var/www/htdocs/manual/"
<Directory "/var/www/htdocs/manual">
Options Indexes FollowSymlinks MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
<Directory "/var/www/cgi-bin">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>
IndexOptions FancyIndexing
AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip
AddIconByType (TXT,/icons/text.gif) text/*
AddIconByType (IMG,/icons/image2.gif) image/*
AddIconByType (SND,/icons/sound2.gif) audio/*
AddIconByType (VID,/icons/movie.gif) video/*
AddIcon /icons/binary.gif .bin .exe
AddIcon /icons/binhex.gif .hqx
AddIcon /icons/tar.gif .tar
AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
AddIcon /icons/a.gif .ps .ai .eps
AddIcon /icons/layout.gif .html .shtml .htm .pdf
AddIcon /icons/text.gif .txt
AddIcon /icons/c.gif .c
AddIcon /icons/p.gif .pl .py
AddIcon /icons/f.gif .for
AddIcon /icons/dvi.gif .dvi
AddIcon /icons/uuencoded.gif .uu
AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl
AddIcon /icons/tex.gif .tex
AddIcon /icons/bomb.gif core
AddIcon /icons/back.gif ..
AddIcon /icons/hand.right.gif README
AddIcon /icons/folder.gif ^^DIRECTORY^^
AddIcon /icons/blank.gif ^^BLANKICON^^
DefaultIcon /icons/unknown.gif
ReadmeName README
HeaderName HEADER
AddEncoding x-compress Z
AddEncoding x-gzip gz tgz
AddLanguage da .dk
AddLanguage nl .nl
AddLanguage en .en
AddLanguage et .ee
AddLanguage fr .fr
AddLanguage de .de
AddLanguage el .el
AddLanguage he .he
AddCharset ISO-8859-8 .iso8859-8
AddLanguage it .it
AddLanguage ja .ja
AddCharset ISO-2022-JP .jis
AddLanguage kr .kr
AddCharset ISO-2022-KR .iso-kr
AddLanguage nn .nn
AddLanguage no .no
AddLanguage pl .po
AddCharset ISO-8859-2 .iso-pl
AddLanguage pt .pt
AddLanguage pt-br .pt-br
AddLanguage ltz .lu
AddLanguage ca .ca
AddLanguage es .es
AddLanguage sv .sv
AddLanguage cz .cz
AddLanguage ru .ru
AddLanguage zh-tw .tw
AddLanguage tw .tw
AddCharset Big5 .Big5 .big5
AddCharset WINDOWS-1251 .cp-1251
AddCharset CP866 .cp866
AddCharset ISO-8859-5 .iso-ru
AddCharset KOI8-R .koi8-r
AddCharset UCS-2 .ucs2
AddCharset UCS-4 .ucs4
AddCharset UTF-8 .utf8
LanguagePriority en da nl et fr de el it ja kr no pl pt pt-br ru ltz ca es sv tw
AddType application/x-tar .tgz
AddType image/x-icon .ico
BrowserMatch "Mozilla/2" nokeepalive
BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0
BrowserMatch "RealPlayer 4\.0" force-response-1.0
BrowserMatch "Java/1\.0" force-response-1.0
BrowserMatch "JDK/1\.0" force-response-1.0
AddType application/x-x509-ca-cert .crt
AddType application/x-pkcs7-crl .crl
SSLPassPhraseDialog builtin
SSLSessionCache dbm:/var/www/logs/ssl_scache
SSLSessionCacheTimeout 300
SSLMutex file:/var/www/logs/ssl_mutex
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
SSLLog /var/www/logs/ssl_engine_log
SSLLogLevel info
DocumentRoot "/var/www/htdocs"
ServerName hostname.companyx.com
ServerAdmin root@hostname.companyx.com
ErrorLog /var/www/logs/error_log
TransferLog /var/www/logs/access_log
SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile /var/www/conf/ssl.crt/server.crt
SSLCertificateKeyFile /var/www/conf/ssl.key/server.key
SSLOptions +StdEnvVars
<Directory "/var/www/cgi-bin">
SSLOptions +StdEnvVars
SetEnvIf User-Agent ".*MSIE.*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
CustomLog /var/www/logs/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

Authentication Setup

Setting up user authentication takes two steps. First, you create a file containing the usernames and passwords. Second, you tell the server what resources are to be protected and which users are allowed (after entering a valid password) to access them.
There are two forms of authentication:
  • Basic (Must have Mod_Auth implemented).
    Client’s web browser sends MIME base64-encoded user credentials (username + password) to the web server when the browser receives a “401—Authorization Required” status code. Basic Authentication is easy to implement, but does not provide any real security against sniffing attacks.
  • Digest (Must have Mod_Digest implemented).
    This makes sending passwords across the Internet more secure. Rather than transmitting the password itself, the browser sends an MD5 hash of the credentials combined with a server-supplied nonce, so the password never crosses the network in a recoverable form. It works exactly the same as Basic authentication as far as the end-user and server administrator are concerned. The use of Digest authentication will depend on whether browser authors write it into their products. While Digest authentication does help with protecting the user's credentials, it does not protect the data itself. You should implement SSL if you need to protect sensitive data in transit.
Make sure the password file containing user credentials is NOT stored within the DocumentRoot directory! If this happens, clients may be able to access this file and view the data. If you need to restrict access to a directory or file, use the following commands:
For Basic authentication:
# htpasswd -c /path/to/passwordfile test
New password: password
Re-type new password: password
Adding password for user test
Within the httpd.conf file, add an entry to protect the desired content:
AuthType Basic
AuthName "Private Access"
AuthUserFile /path/to/passwordfile
Require user test
For Digest authentication:
# htdigest -c /path/to/digestfile "Private Access" test
New password: password
Re-type new password: password
Adding password for user test
Within the httpd.conf file, add an entry to protect the desired content:
AuthType Digest
AuthName "Private Access"
AuthDigestFile /path/to/digestfile
Require user test
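As noted previously, the password and digest files should not live under the DocumentRoot. If they must reside under a directory that Apache serves, a deny block along the following lines keeps clients from downloading them (a minimal sketch; the file-name pattern is an assumption and should be adjusted to match your own naming convention):
<Files ~ "^\.ht|passwordfile|digestfile">
Order allow,deny
Deny from all
</Files>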

Access Control: Where Clients Come From

There are two options for controlling access based on where the client is connecting from. They are the Allow and Deny directives. These are pretty straightforward. The Allow directive grants access, while the Deny directive denies access. The determination is based on one of the following parameters:
  • Hostname or Domain
  • IP Address or IP Range
  • Client Request ENV

Hostname or Domain

Here is an example of setting access control restrictions on the DocumentRoot to only allow clients from www.apache.org and from the .apache.org domain:
<Directory "/usr/local/apache/htdocs">
Order allow,deny
Deny from all
Allow from www.apache.org
Allow from .apache.org
In the configuration shown previously, it is important to point out that the "." in the domain name does matter! For example, this configuration would deny access to someone coming from the fooapache.org domain; however, it would allow someone coming from the foo.apache.org domain.
If you plan to restrict access based on either the hostname or domain, there are a few issues to note, most notably that Apache will perform a double reverse DNS lookup on all client access attempts regardless of the HostnameLookups directive. If you are concerned about the overhead associated with hostname resolution and therefore turned off HostnameLookups, then you should not utilize hostnames or domain names for access control. The other potential security issue involved with using hostnames for access control is the possibility of some sort of DNS spoofing or poisoning attack. If successful, the Apache server may allow access to data that should not have been allowed based on bogus DNS resolution.

IP Address and IP Range

Controlling access based on the client IP address or IP range is identical in syntax to using hostnames or domain names. The advantages to using IP addresses are that there is no overhead that is normally associated with hostname lookups, and it alleviates the possibility of a DNS-based attack. Here is an example that accomplishes the same goal as the one shown previously by allowing www.apache.org and the .apache.org domain.
<Directory "/usr/local/apache/htdocs">
Order allow,deny
Deny from all
Allow from 209.237.227.195
Allow from 209.237.

Client Request ENV

Apache can also control access based on the value of environment variables. This allows for flexible control based on the characteristics of the connection. As opposed to the previously listed access control options of a hostname or IP address, the parameters are either allow from env= or deny from env=. Before these directives can be utilized, the environmental token of interest must be identified and marked with the SetEnvIf directive. For instance, let’s say that we wanted to only allow access to our web server from a client who was using a specific User-Agent application called “Secret-Agent.” This would be accomplished by the following directives:
SetEnvIf User-Agent ^Secret-Agent$ pass
<Directory "/usr/local/apache/htdocs">
Order deny,allow
Deny from all
Allow from env=pass
</Directory>
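The deny from env= form works in the same way. Here is a minimal sketch (the User-Agent string and environment variable name are hypothetical) that blocks a particular scanner based on its User-Agent header:
SetEnvIf User-Agent ^BadScanner$ block_me
<Directory "/usr/local/apache/htdocs">
Order allow,deny
Allow from all
Deny from env=block_me
</Directory>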

Protecting the Root Directory

This section aims to clarify information provided on the Apache web site in relation to security settings recommended to protect the root directory from access by Apache. Apache.org has provided a web page with “Security Tips” information to help new users. This page is located at the following URL: http://httpd.apache.org/docs-2.0/misc/security_tips.html#protectserverfiles. There is a section entitled, “Protect Server Files by Default.” Here is what it states:
One aspect of Apache, which is occasionally misunderstood, is the feature of default access. That is, unless you take steps to change it, if the server can find its way to a file through normal URL mapping rules, it can serve it to clients. For instance, consider the following example:
# cd /; ln -s / public_html
This would allow clients to walk through the entire filesystem. To work around this, add the following block to your server’s configuration:
<Directory />
Order Deny,Allow
Deny from all
</Directory>
This will forbid default access to file system locations. Add appropriate directory blocks to allow access only to those areas you wish.
This information is correct; however, it is a poor example. This example is utilizing the functionality of mod_userdir and aims to prevent users from accessing the root user’s home directory. If, however, Apache is configured to FollowSymLinks, then it will still be able to access the root directory regardless of the access control directives that you implement. If you want to protect the root directory from accesses by Apache, make sure that you do the following:
  • Disable FollowSymLinks in the DocumentRoot directory directive.
  • Do not enable the mod_userdir module.
  • If you must use mod_userdir for proper functionality, implement the Userdir disabled root directive (see the configuration sketch after this list).
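Here is a minimal configuration sketch combining these settings (the directory path is illustrative and should match your own DocumentRoot):
# Keep symbolic-link following off for the document tree
<Directory "/var/www/htdocs">
Options -FollowSymLinks
</Directory>
# Only relevant if mod_userdir is loaded at all
UserDir disabled root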

Authorization

The Authorization section covers attacks that target a web site's method of determining whether a user, service, or application has the necessary permissions to perform a requested action. For example, many web sites should only allow certain users to access specific content or functionality, and a given user's access to particular resources may need to be restricted. Using various techniques, an attacker can fool a web site into granting access to protected areas or elevated privileges.

Credential/Session Prediction

Credential/Session Prediction is a method of hijacking or impersonating a web site user. The attack is accomplished by deducing or guessing the unique value that identifies a particular session or user. Also known as Session Hijacking, its consequences could allow attackers the ability to issue web site requests with the compromised user's privileges.
Many web sites are designed to authenticate and track a user when communication is first established. To do this, users must prove their identity to the web site, typically by supplying a username/password (credentials) combination. Rather than passing these confidential credentials back and forth with each transaction, web sites will generate a unique “session ID” to identify the user session as authenticated. Subsequent communication between the user and the web site is tagged with the session ID as “proof” of the authenticated session. If an attacker is able to predict or guess the session ID of another user, fraudulent activity is possible.
Credential/Session Prediction Example
Many web sites attempt to generate session IDs using proprietary algorithms. These custom methodologies might generate session IDs by simply incrementing static numbers. Or there could be more complex procedures such as factoring in time and other computer-specific variables.
The session ID is then stored in a cookie, hidden form-field, or URL. If an attacker can determine the algorithm used to generate the session ID, an attack can be mounted as follows:
  1. Attacker connects to the web application acquiring the current session ID.
  2. Attacker calculates or Brute Forces the next session ID.
  3. Attacker switches the current value in the cookie/hidden form-field/URL and assumes the identity of the next user.
Apache Countermeasures for Credential/Session Prediction Attacks
There are several protective measures that should be taken to ensure adequate protection of session IDs.
  1. Make sure to use SSL to prevent network sniffing of valid credentials.
  2. Add both the "secure" and "httponly" tokens to all SessionID cookies. These two cookie options will help to secure the credentials by forcing the user's browser to send this sensitive data only over an SSL tunnel and by preventing scripts from accessing this data. The best solution for implementing this is to have the application developers update the code to include these parameters when generating/sending cookies to clients. It is possible, however, to have Apache add these tokens into the outbound cookie if you utilize Mod_Perl (an alternative using mod_headers is sketched after this list). You could implement a Perl handler that hooks into the output filter of Apache with code such as this:
# read the inbound cookie and append the secure and httponly parameters
# (SESSION_ID is the cookie name used by the application in this example)
my $r = Apache->request;
my $cookie = $r->header_in('Cookie');
$cookie =~ s/SESSION_ID=(\w*)/SESSION_ID=$1; secure; httponly/;
  3. Also with Mod_Perl, you can implement the Apache::TicketAccess module that was highlighted in the book Writing Apache Modules with Perl and C by Lincoln Stein and Doug MacEachern. This module was designed to have the client authenticate once; it then issues a hashed "ticket" that is checked on subsequent requests. The hash is generated from the following data: the user's name, IP address, an expiration date, and a cryptographic signature. This system provides increased security due to its use of the cryptographic signature and of the client's IP address for validation. Because of the popularity of proxy servers these days, you could also update the IP address check to cover only the Class C range of the address instead of the full address, or you could substitute the X-Forwarded-For client header that is added by many proxies.
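If you are running a newer Apache (2.2.4 or later) with mod_headers loaded, you could instead have the server rewrite the outbound Set-Cookie header itself rather than using a Mod_Perl handler. This is a minimal sketch, and it is applied to every cookie the application issues:
# Assumes mod_headers and Apache 2.2.4+ (where Header edit is available)
Header edit Set-Cookie ^(.*)$ "$1; HttpOnly; Secure"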
Beyond Apache mitigations, session IDs should meet the following criteria:
  1. Session IDs are random. Methods used to create secure session credentials should rely on cryptographically secure algorithms.
  2. Session IDs are large enough to thwart Brute Force attacks.
  3. Session IDs will expire after a certain length of time. (1–2 days).
  4. Session IDs are invalidated by both the client and server during log-out.
By following these guidelines, the risk of session ID guessing can be eliminated or minimized. Other ways to strengthen defenses against session prediction are as follows:
  • Require users to re-authenticate before performing critical web site operations.
  • Tie the session credential to the user's specific IP address or a partial IP range. Note: This may not be practical, particularly when Network Address Translation is in use.
  • It is generally best to use the session IDs generated by the JSP or ASP engine you are using. These engines have typically been scrutinized for security weaknesses, and while they are not impervious to attacks, they do provide random, large session IDs. In Java, this is done by using the HttpSession object to maintain state.
References
“iDefense: Brute-Force Exploitation of Web Application Session ID’s” By David Endler—iDEFENSE Labs www.cgisecurity.com/lib/SessionIDs.pdf
“Best Practices in Managing HTTP-Based Client Sessions” By Gunter Ollmann—X-Force Security Assessment Services EMEA www.itsecurity.com/papers/iss9.htm
“A Guide to Web Authentication Alternatives” By Jan Wolter www.unixpapa.com/auth/homebuilt.html

Insufficient Authorization

Insufficient Authorization is when a web site permits access to sensitive content or functionality that should require increased access control restrictions. When a user is authenticated to a web site, it does not necessarily mean that he should be granted full access to all content and functionality.
Authorization procedures are performed after authentication, enforcing what a user, service, or application is permitted to do. Thoughtful restrictions should govern particular web site activity according to policy. Sensitive portions of a web site may need to be restricted so that only an administrator can access them.
Insufficient Authorization Example
In the past, many web sites have stored administrative content and/or functionality in hidden directories such as /admin or /logs. If an attacker were to directly request these directories, he would be allowed access. He may thus be able to reconfigure the web server, access sensitive information, or compromise the web site.
Apache Countermeasures for Insufficient Authorization
Similar to the issues raised in the previous section entitled “Insufficient Authentication,” you should implement authorization access controls in addition to the authentication restrictions. One way to restrict access to URLs is to implement host-based ACLs that will deny access attempts unless the client is coming from an approved domain or IP address range. We can update the ACL created previously for the “/admin/” directory like this:
<LocationMatch "^/admin/">
SSLRequireSSL
AuthType Digest
AuthName "Admin Area"
AuthDigestFile /usr/local/apache/conf/passwd_digest
Require user admin
Order Deny,Allow
Allow from .internal.domain.com
Deny from all
</LocationMatch>
This would only allow connections from the “.internal.domain.com” name space. If an Internet client attempted to connect to this URL, they would be denied with a 403 Forbidden. Implementing these types of authorization restrictions is not difficult; however, the trick is identifying all of these sensitive locations. It is for this reason that you should run web vulnerability scanning software to help enumerate this data.
References
“iDefense: Brute-Force Exploitation of Web Application Session ID’s” By David Endler—iDEFENSE Labs www.cgisecurity.com/lib/SessionIDs.pdf

Insufficient Session Expiration

Insufficient Session Expiration is when a web site permits an attacker to reuse old session credentials or session IDs for authorization. Insufficient Session Expiration increases a web site’s exposure to attacks that steal or impersonate other users.
Because HTTP is a stateless protocol (meaning that it cannot natively associate different requests together), web sites commonly use session IDs to uniquely identify a user from request to request. Consequently, each session ID’s confidentiality must be maintained in order to prevent multiple users from accessing the same account. A stolen session ID can be used to view another user’s account or perform a fraudulent transaction.
The lack of proper session expiration may improve the likelihood of success of certain attacks. For example, an attacker may intercept a session ID, possibly via a network sniffer or Cross-site Scripting attack. Although short session expiration times do not help if a stolen token is immediately used, they will protect against ongoing replay of the session ID. In another scenario, a user might access a web site from a shared computer (such as at a library, Internet cafe, or open work environment). Insufficient Session Expiration could allow an attacker to use the browser's back button to access web pages previously accessed by the victim.
A long expiration time increases an attacker’s chance of successfully guessing a valid session ID. The long length of time increases the number of concurrent and open sessions, which enlarges the pool of numbers an attacker might guess.
Insufficient Session Expiration Example
In a shared computing environment (more than one person has unrestricted physical access to a computer), Insufficient Session Expiration can be exploited to view another user’s web activity. If a web site’s logout function merely sends the victim to the site’s home page without ending the session, another user could go through the browser’s page history and view pages accessed by the victim. Because the victim’s session ID has not been expired, the attacker would be able to see the victim’s session without being required to supply authentication credentials.
Apache Countermeasures Against Insufficient Session Expiration
There are three main scenarios where session expiration should occur:
  • Forcefully expire a session token after an appropriate predefined period of time. The time could range from 30 minutes for a banking application to a few hours for email applications. At the end of this period, the user must be required to re-authenticate.
  • Forcefully expire a session token after a predefined period of inactivity. If a session has not received any activity during a specific period, then the session should be ended. This value should be less than or equal to the period of time mentioned in the previous step. This limits the window of opportunity available to an attacker to guess token values.
  • Forcefully expire a session token when the user actuates the log-out function. The browser's session cookies should be deleted and the user's session object on the server should be destroyed (this removes all data associated with the session; it does not delete the user's data). This prevents "back button" attacks and ensures that a user's session is closed when explicitly requested.
Apache has no built-in capability to control session expiration; therefore, you would need to implement a third-party module to handle this task. If you implement Mod_Perl, there are numerous modules available that may assist with this task. A few example modules are as follows:
  • Apache::TicketAccess
  • Apache::Session
  • CGI::Session
You could also make the move and use the Tomcat web server from the Apache Jakarta Project: http://jakarta.apache.org/tomcat. With Tomcat, you could utilize Java to manage/track user sessions.
References
“Dos and Don’ts of Client Authentication on the Web” By Kevin Fu, Emil Sit, Kendra Smith, Nick Feamster—MIT Laboratory for Computer Science http://cookies.lcs.mit.edu/pubs/webauth:tr.pdf

Session Fixation

Session Fixation is an attack technique that forces a user's session ID to an explicit value. Depending on the functionality of the target web site, a number of techniques can be utilized to "fix" the session ID value. These techniques range from Cross-site Scripting exploits to peppering the web site with previously made HTTP requests. After a user's session ID has been fixed, the attacker waits for that user to log in. Once the user does so, the attacker uses the predefined session ID value to assume the user's online identity.
Generally speaking, there are two types of session management systems for ID values. The first type is “permissive” systems that allow web browsers to specify any ID. The second type is “strict” systems that only accept server-side generated values. With permissive systems, arbitrary session IDs are maintained without contact with the web site. Strict systems require the attacker to maintain the “trap-session” with periodic web site contact, preventing inactivity timeouts.
Without active protection against Session Fixation, the attack can be mounted against any web site that uses sessions to identify authenticated users. Web sites using session IDs are normally cookie-based, but URLs and hidden form-fields are used as well. Unfortunately, cookie-based sessions are the easiest to attack. Most of the currently identified attack methods are aimed toward the fixation of cookies.
In contrast to stealing a user’s session ID after they have logged into a web site, Session Fixation provides a much wider window of opportunity. The active part of the attack takes place before the user logs in.
Session Fixation Example
The Session Fixation attack is normally a three-step process:
  1. Session set-up. The attacker sets up a "trap-session" for the target web site and obtains that session's ID. Or, the attacker may select an arbitrary session ID used in the attack. In some cases, the established trap session value must be maintained (kept alive) with repeated web site contact.
  2. Session fixation. The attacker introduces the trap session value into the user's browser and fixes the user's session ID.
  3. Session entrance. The attacker waits until the user logs into the target web site. When the user does so, the fixed session ID value will be used and the attacker may take over.
Fixing a user’s session ID value can be achieved with the techniques described in the following sections.
Issuing a New Session ID Cookie Value Using a Client-Side Script
A Cross-site Scripting vulnerability present on any web site in the domain can be used to modify the current cookie value, as shown in the following code snippet:
http://example/<script>document.cookie="sessionid=1234;%20domain=.example.dom";</script>.idc
Issuing a Cookie Using the META Tag
This method is similar to the previous one, but also effective when Cross-site Scripting countermeasures prevent the injection of HTML script tags, but not meta tags. This can be seen in the following code snippet.
http://example/<meta%20http-equiv=Set-Cookie%20
content="sessionid=1234;%20domain=.example.dom">.idc
Issuing a Cookie Using an HTTP Response Header
The attacker forces either the target web site, or any other site in the domain, to issue a session ID cookie. This can be achieved in the following ways:
  • Breaking into a web server in the domain (e.g., a poorly maintained WAP server).
  • Poisoning a user’s DNS server, effectively adding the attacker’s web server to the domain.
  • Setting up a malicious web server in the domain (e.g., on a workstation in a Windows 2000 domain, where all workstations are also in the DNS domain).
  • Exploiting an HTTP response splitting attack.

NOTE

A long-term Session Fixation attack can be achieved by issuing a persistent cookie (e.g., expiring in 10 years), which will keep the session fixed even after the user restarts the computer, as shown here:
http://example/<script>document.cookie="sessionid=1234;%20domain=.example.dom;%20expires=Fri,%2001-Jan-2016%2000:00:00%20GMT";</script>.idc
Apache Countermeasures for Session Fixation Attacks
Session Fixation attacks can be mitigated at each of the three phases of the attack:
  1. Session set-up.
  2. Session fixation.
  3. Session entrance.
Session Set-Up
In this phase, the attacker needs to obtain a valid session ID from the web application. If the application only sends this session ID information after successfully logging in, then the pool of possible attackers can be reduced to those who already have an account.
If the web application does provide a session ID prior to successful login, then it may still be possible to identify an attacker who is enumerating the session ID characteristics. In this circumstance, the attacker usually will try to gather a large number of session IDs for evaluation purposes to see if they can potentially predict a future value. During this gathering phase, their scanning applications will most likely trigger Mod_Dosevasive, thus alerting security personnel.
Session Fixation
During this phase, the attacker needs to somehow inject the desired session ID into the victim’s browser. We can mitigate these issues by implementing a few Mod_Security filters, which will block these injection attacks:
# Weaker XSS protection but allows common HTML tags
SecFilter "<[[:space:]]*script"
# Prevent XSS attacks (HTML/JavaScript injection)
SecFilter "<.+>"
# Block passing Cookie/SessionIDs in the URL
SecFilterSelective THE_REQUEST "(document\.cookie|Set-Cookie|SessionID=)"
Session Entrance
When a client accesses the login URL, any session ID token provided by the client’s browser should be ignored, and the web application should generate a new one. You can add the following Apache RequestHeader directive to remove these untrusted tokens:
RequestHeader unset SessionID
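To scope this to the login URL as the preceding paragraph describes, the directive can be placed inside a Location container. The following is a minimal sketch, assuming the login page lives at /account/login.php and the client-supplied token arrives in a request header named SessionID (both names are illustrative, following the examples in this chapter); mod_headers must be loaded for RequestHeader to be available:
<Location "/account/login.php">
    # Drop any client-supplied session token before the application sees it
    RequestHeader unset SessionID
</Location>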
The session ID that is generated by the web application should include a token that identifies the client’s IP address. If the client IP address does not match what is stored in the session ID, then the client should be forced to re-authenticate.
References
“Session Fixation Vulnerability in Web-based Applications” By Mitja Kolsek—Acros Security www.acrossecurity.com/papers/session_fixation.pdf
“Divide and Conquer” By Amit Klein—Sanctum www.sanctuminc.com/pdf/whitepaper_httpresponse.pdf

Information Disclosure

 The Information Disclosure section covers attacks designed to acquire system-specific information about a website. This system-specific information includes the software distribution, version numbers, and patch levels, or the information may contain the location of backup files and temporary files. In most cases, divulging this information is not required to fulfill the needs of the user. Most websites will reveal some data, but it’s best to limit the amount of data whenever possible. The more information about the website an attacker learns, the easier the system becomes to compromise.

Directory Indexing

Automatic directory listing/indexing is a web server function that lists all of the files within a requested directory if the normal base file (index.html/home.html/default.htm) is not present. When a user requests the main page of a website, he normally types in a URL such as http://www.example.com, using the domain name and excluding a specific file. The web server processes this request and searches the document root directory for the default filename and sends this page to the client. If this page is not present, the web server will issue a directory listing and send the output to the client. Essentially, this is equivalent to issuing a “ls” (Unix) or “dir” (Windows) command within this directory and showing the results in HTML form. From an attack and countermeasure perspective, it is important to realize that unintended directory listings may be possible due to software vulnerabilities (discussed next in the example section) combined with a specific web request.

 When a web server reveals a directory’s contents, the listing could contain information not intended for public viewing. Often web administrators rely on “Security Through Obscurity,” assuming that if there are no hyperlinks to these documents, they will not be found, or no one will look for them. The assumption is incorrect. Today’s vulnerability scanners, such as Nikto, can dynamically add additional directories/files to include in their scan based upon data obtained in initial probes. By reviewing the /robots.txt file and/or viewing directory indexing contents, the vulnerability scanner can now interrogate the web server further with this new data. Although potentially harmless, directory indexing could allow an information leak that supplies an attacker with the information     necessary to launch further attacks against the system.

Directory Indexing Example

 The following information could be obtained    based on directory indexing data:

  • Backup files—with extensions such as .bak, .old, or .orig.

  •  Temporary files—these are files that are normally purged from the server but for some reason are still available.

  • Hidden files—with filenames that start with a “.” (period).

  •  Naming conventions—an attacker may be able to identify the composition scheme used by the web site to name directories or files. Example: Admin versus admin, backup versus back-up, and so on.

  •  Enumerate user accounts—personal user accounts on a web server often have home directories named after their user account.

  •  Configuration file contents—these files may contain access control data and have extensions such as .conf, .cfg, or .config.

  •  Script contents—Most web servers allow for executing scripts by either specifying a script location (e.g., /cgi-bin) or by configuring the server to try and execute files based on file permissions (e.g., the execute bit on *nix systems and the use of the Apache XBitHack directive). Due to these options, if directory indexing of cgi-bin contents is allowed, it is possible to download and review the script code if the permissions are incorrect.

 There are three different scenarios where an attacker may be able to retrieve an unintended directory listing/index:

  1.  The web server is mistakenly configured to allow/provide a directory index. Confusion may arise about the net effect when a web administrator is configuring the indexing directives in the configuration file. It is possible to produce an undesired result when implementing complex settings, such as wanting to allow directory indexing for a specific sub-directory while disallowing it on the rest of the server. From the attacker’s perspective, the HTTP request is normal: they request a directory and see if they receive the desired content. They do not know or care “why” the web server was configured in this manner.

  2.  Some components of the web server allow a directory index even if it is disabled within the configuration file or if an index page is present. This is the only valid “exploit” example scenario for directory indexing. There have been numerous vulnerabilities identified on many web servers that will result in directory indexing if specific HTTP requests are sent.

  3.  Search engines’ cache databases may contain historical data that would include directory indexes from past scans of a specific web    site.

Apache Countermeasures for Directory Indexing

First of all, if directory indexing is not required for some specific purpose, then it should be disabled in the Options directive, as outlined in Chapter 4. If directory indexing is accidentally enabled, you can implement the following Mod_Security directive to catch this information in the output data stream. Figure 7.1 shows what a standard directory index web page looks like.

 Figure 7.1. Standard directory index web page. 


Web pages that are dynamically created by the directory indexing function will have a title that starts with “Index of /”. We can use this data as a signature and add the following Mod_Security directives to catch and deny access to this data:

SecFilterScanOutput On
SecFilterSelective OUTPUT "\<title\>Index of /"
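For completeness, the primary fix of disabling indexing in the server configuration itself might look like the following minimal sketch; the directory path shown is illustrative:
<Directory "/usr/local/apache/htdocs">
    # Remove Indexes from the Options list so no automatic listing is generated
    Options -Indexes
</Directory>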

References

     Directory Indexing Vulnerability Alerts www.securityfocus.com/bid/1063 www.securityfocus.com/bid/6721 www.securityfocus.com/bid/8898

 Nessus “Remote File Access” Plugin web page http://cgi.nessus.org/plugins/dump.php3?family=Remote%20file%20access

 Web Site Indexer Tools www.download-freeware-shareware.com/Internet.php?Theme=112

 Search Engines as a Security Threat http://it.korea.ac.kr/class/2002/software/Reading%20List/Search%20Engines%20a%20a%20Security%20Threat.pdf

The Google Hacker’s Guide http://johnny.ihackstuff.com/security/premium/The_Google_Hackers_Guide_v1.0.pdf

Information Leakage

Information Leakage occurs when a web site     reveals sensitive data, such as developer comments or error messages, which may aid an attacker in exploiting the system. Sensitive information may be present within HTML comments, error messages, source code, or simply left in plain sight. There are many ways a web site can be coaxed into revealing this type of information. While leakage does not necessarily represent a breach in security, it does give an attacker useful guidance for future exploitation. Leakage of sensitive information may carry various levels of risk and should be limited whenever possible.
In the first case of Information Leakage (comments left in the code, verbose error messages, etc.), the leak may provide the attacker with contextual intelligence about the directory structure, SQL query structure, and the names of key processes used by the web site.
 Often a developer will leave comments in the HTML and script code to help facilitate debugging or integration. This information can range from simple comments detailing how the script works, to, in the worst cases, usernames and passwords used during the testing phase of development.
Information Leakage also applies to data deemed confidential that aren’t properly protected by the web site. These data may include account numbers, user identifiers (driver’s license number, passport number, social security number, etc.), and user-specific data (account balances, address, and transaction history). Insufficient Authentication, Insufficient Authorization, and secure transport encryption also deal with protecting and enforcing proper controls over access to data. Many attacks fall outside the scope of web site protection, such as client-side attacks and “casual observer” concerns. Information Leakage in this context deals with exposure of key user data deemed confidential or secret that should not be exposed in plain view, even to the user. Credit card numbers are a prime example of user data that needs to be further protected from exposure or leakage even with proper encryption and access controls in place.
 
Information Leakage Example
There are three main categories of Information Leakage: comments left in code, verbose error messages, and confidential data in plain sight. Comments left in code:
 Here we see a comment left by the development/QA personnel indicating what one should do if the image files do not show up. The security breach is the host name of the server that is mentioned explicitly in the code, “VADER.”
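The original comment itself is not reproduced here; a comment of this general kind might look like the following sketch, where everything except the hostname is purely illustrative:
<!-- QA note: if the product images do not show up, re-upload the
     /images directory via FTP to the internal server VADER -->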
 An example of a verbose error message can be the response to an invalid query. A prominent example is the error message associated with SQL queries. SQL Injection attacks typically require the attacker to have prior knowledge of the structure or format used to create SQL queries on the site. The information leaked by a verbose error message can provide the attacker with crucial information on how to construct valid SQL queries for the backend database. The following was returned when placing an apostrophe into the username field of a login page:
An Error Has Occurred.
Error Message:
System.Data.OleDb.OleDbException: Syntax error (missing
operator) in query expression 'username = ''' and password =
'g''. at
System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling (
Int32 hr) at
System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult
( tagDBPARAMS dbParams, Object& executeResult) at
 In the first error statement, a syntax error is reported. The error message reveals the query parameters that are used in the SQL query: username and password. This leaked information is the missing link for an attacker to begin to construct SQL Injection attacks against the site.
 Confidential data left in plain sight could be files that are placed on a web server with no direct html links pointing to them. Attackers may enumerate these files by either guessing filenames based     on other identified names or perhaps through the use of a local search engine.

Apache Countermeasures for Information Leakage

Preventing Verbose Error Messages
Containing information leaks such as these requires Apache to inspect the outbound data sent from the web applications to the client. One way to do this, as we have discussed previously, is to use the OUTPUT filtering capabilities of Mod_Security. We can easily set up a filter to watch for common database error messages being sent to the client and then generate a generic 500 status code instead of the verbose message:
SecFilterScanOutput On
SecFilterSelective OUTPUT "An Error Has Occurred" status:500
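To keep the response friendly to legitimate users, the generated 500 status can be paired with a generic custom error page using the standard ErrorDocument directive; a minimal sketch (the page path is illustrative):
# Serve a static, generic page for 500 errors instead of the verbose message
ErrorDocument 500 /errors/generic-error.html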
 
Preventing Comments in HTML
 While Mod_Security is efficient     at identifying signature patterns, it does have one current shortcoming. Mod_Security cannot manipulate  the data in the transaction. When dealing with information disclosures in HTML comment tags, it would not be appropriate to deny the entire request for a web page due to comment tags. So how can we handle this? There is a really cool feature in the Apache 2.0 version called filters: http://httpd.apache.org/docs-2.0/mod/mod_ext_filter.html . The basic premise of filters is that they read from standard input and print to standard output. This feature becomes intriguing from a security perspective when dealing with this type of information disclosure prevention. First, we use the ExtFilterDefine  directive to set up our output filter. In this directive, we tell Apache that this is an output filter, that the input data will be text, and that we want to use an OS command to act on the data. In this case, we can use the Unix Stream Editor program (sed) to strip out any comment tags. The last step is to use the SetOutputFilter  directive to activate the filter in a LocationMatch directive. We can add the following data to the httpd.conf  file to effectively remove all HTML comment tags, on-the-fly, as they are being sent to the client:
ExtFilterDefine remove_comments mode=output intype=text/html \
cmd="/bin/sed s/\<\!--.*--\>//g"
<LocationMatch "/">
SetOutputFilter remove_comments
</LocationMatch>
Pretty slick, huh? Just think: this is merely the tip of the iceberg as far as the potential for using filters for security purposes.
 
References
 “Best practices with custom error pages in .Net,” Microsoft Support http://support.microsoft.com/default.aspx?scid=kb;en-us;834452
 “Creating Custom ASP Error Pages,” Microsoft Support http://support.microsoft.com/default.aspx?scid=kb;en-us;224070
 “Apache Custom Error Pages,” Code Style www.codestyle.org/sitemanager/apache/errors-Custom.shtml
 “Customizing the Look of Error Messages in JSP,” DrewFalkman.com www.drewfalkman.com/resources/CustomErrorPages.cfm

Path Traversal

 The Path Traversal attack     technique forces access to files, directories, and commands that potentially reside outside the web document root directory. An attacker may manipulate a URL in such a way that the web site will execute or reveal the contents of arbitrary files anywhere on the web server. Any device that exposes an HTTP-based interface is potentially vulnerable to Path Traversal.
Most web sites restrict user access to a specific portion of the file-system, typically called the “web document root” or “CGI root” directory. These directories contain the files intended for user access and the executables necessary to drive web application functionality. To access files or execute commands anywhere on the file system, Path Traversal attacks utilize special-character sequences.
 The most basic Path Traversal attack uses the “../” special-character sequence to alter the resource location requested in the URL. Although most popular web servers will prevent this technique from escaping the web document root, alternate encodings of the “../” sequence may help bypass the security filters. These method variations include valid and invalid Unicode-encoding (“..%u2216” or “..%c0%af”) of the forward slash character, backslash characters (“..\”) on Windows-based servers, URL-encoded characters (“%2e%2e%2f”), and double URL encoding (“..%255c”) of the backslash character.
 Even if the web server properly restricts Path Traversal attempts in the URL path, a web application itself may still be vulnerable due to improper handling of user-supplied input. This is a common problem of web applications that use template mechanisms or load static text from files. In variations of the attack, the original URL parameter value is substituted with the filename of one of the web application’s dynamic scripts. Consequently, the results can reveal source code because the file is interpreted as text instead of an executable script. These techniques often employ additional special characters such as the dot (“.”) to reveal the listing of the current working directory, or “%00” NUL characters     in order to bypass rudimentary file extension checks.
 
Path Traversal Examples
 
Path Traversal Attacks Against a Web Server
GET /../../../../../some/file HTTP/1.0
GET /..%255c..%255c..%255csome/file HTTP/1.0
GET /..%u2216..%u2216some/file HTTP/1.0
 
Path Traversal Attacks Against a Web Application
Normal: GET /foo.cgi?home=index.htm HTTP/1.0
Attack: GET /foo.cgi?home=foo.cgi HTTP/1.0
    In the previous example, the web application reveals the source code of the foo.cgi file because the value of the home variable was used as content. Notice that in this case, the attacker does not need to submit any invalid characters or any path traversal characters for the attack to succeed. The attacker has targeted another file in the same directory as index.htm.
 
Path Traversal Attacks Against a Web Application Using Special-Character Sequences
Original: GET /scripts/foo.cgi?page=menu.txt HTTP/1.0
Attack: GET /scripts/foo.cgi?page=../scripts/foo.cgi%00txt HTTP/1.0
In this example, the web application reveals the source code of the foo.cgi file by using special-character sequences. The “../” sequence was used to traverse one directory above the current directory and enter the /scripts directory. The “%00” sequence was used both to bypass the file extension check and to snip off the extension when the file was read in.
 
Apache Countermeasures for Path Traversal Attacks
Ensure that the user account under which the web server or web application runs is given the least amount of read permission possible for files outside of the web document root. This also applies to scripting engines or modules necessary to interpret dynamic pages for the web application. We addressed this step at the end of the CIS Apache Benchmark document when we updated the permissions on the different directories to remove READ permissions.
 Normalize all path references before applying security checks. When the web server decodes path and filenames, it should parse each encoding scheme it encounters before applying security checks on the supplied data and submitting the value to the file access function. Mod_Security  has numerous normalizing checks: URL decoding and removing evasion attempts such as directory self-referencing.
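A minimal sketch of enabling these checks in Mod_Security, along with a simple filter for the “../” sequence, follows; the directive values mirror the module’s commonly published examples and should be confirmed against your installed version:
SecFilterEngine On
# Reject requests that contain invalid URL or Unicode encodings
SecFilterCheckURLEncoding On
SecFilterCheckUnicodeEncoding On
# Deny any request that still contains a directory self-reference after normalization
SecFilter "\.\./"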
 If filenames will be passed in URL parameters, then use a hard-coded file extension constant to limit access to specific file types. Append this constant to all filenames. Also, make sure to remove all NULL-character (%00) sequences in order to prevent attacks that bypass this type of check. (Some interpreted scripting languages permit NULL characters within a string, even though the underlying operating system truncates strings at the first NULL character.) This prevents directory traversal attacks within the web document root that attempt to view dynamic script files.
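Rejecting NULL bytes can also be enforced at the web server layer; a minimal Mod_Security sketch using the byte-range check (the 1-255 range is the value commonly shown in the module’s examples):
# Refuse any request containing a NULL (0x00) byte anywhere in the payload
SecFilterForceByteRange 1 255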
Validate all input so that only the expected character set is accepted (such as alphanumeric). The validation routine should be especially aware of shell meta-characters such as path-related characters (/ and \) and command concatenation characters (&& for Windows shells and semi-colon for Unix shells). Set a hard limit for the length of a user-supplied value. Note that this step should be applied to every parameter passed between the client and server, not just the parameters expected to be modified by the user through text boxes or similar input fields. We can create a Mod_Security filter for the foo.cgi script to help restrict the type of file that may be referenced in the “home” parameter.
SecFilterSelective SCRIPT_FILENAME "/scripts/foo.cgi" chain
SecFilterSelective ARG_home "!^[a-zA-Z0-9]{1,15}\.txt$"
 This filter will reject any value of the “home” parameter that is not a filename of at most 15 alphanumeric characters with a “.txt” extension.
 
References
 “CERT Advisory CA-2001-12 Superfluous Decoding Vulnerability in IIS” www.cert.org/advisories/CA-2001-12.html
 “Novell Groupwise Arbitrary File Retrieval Vulnerability” www.securityfocus.com/bid/3436/info/

Predictable Resource Location

Predictable Resource Location is an attack technique used to uncover hidden web site content and functionality. By making educated guesses, the attack is a brute force search looking for content that is not intended for public viewing. Temporary files, backup files, configuration files, and sample files are all examples of potentially leftover files. These brute force searches are easy because hidden files will often have common naming conventions and reside in standard locations. These files may disclose sensitive information about web application internals, database information, passwords, machine names, file paths to other sensitive areas, or possibly contain vulnerabilities. Disclosure of this information is valuable to an attacker. Predictable Resource Location is also known as Forced Browsing, File Enumeration, Directory Enumeration, and so forth.
 
Predictable Resource Location Examples
Any attacker     can make arbitrary file or directory requests to any publicly available web server. The existence of a resource can be determined by analyzing the web server HTTP response codes. There are several Predictable Resource Location attack variations.
 
Blind Searches for Common Files and Directories
/admin/
/backup/
/logs/
/vulnerable_file.cgi
 
Adding Extensions to Existing Filename: (/test.asp)
/test.asp.bak
/test.bak
/test
 
Apache Countermeasures for Predictable Resource Location Attacks
To prevent a successful Predictable Resource Location attack and protect against sensitive file misuse, there are two recommended solutions. First, remove files that are not intended for public viewing from all accessible web server directories. Once these files have been removed, you can create security filters to identify if someone probes for these files. Here are some example Mod_Security filters that would catch this action:
SecFilterSelective REQUEST_URI "^/(scripts|cgi-local|htbin|cgibin|cgis|win-cgi|cgi-win|bin)/"
SecFilterSelective REQUEST_URI ".*\.(bak|old|orig|backup|c)$"
These two filters deny access both to unused but commonly scanned-for directories and to files with common backup extensions.
 

Logical Attacks

The Logical Attacks section    focuses on the abuse or exploitation of a web application’s logic flow. Application logic is the expected procedural flow used in order to perform a certain action. Password recovery, account registration, auction bidding, and eCommerce purchases are all examples of application logic. A web site may require a user to correctly perform a specific multi-step process to complete a particular action. An attacker may be able to circumvent or misuse these features to harm a web site and its users.
 

Abuse of Functionality

Abuse of Functionality is     an attack technique that uses a web site’s own features and functionality to consume, defraud, or circumvent access control mechanisms. Some functionality of a web site, possibly even security features, may be abused to cause unexpected behavior. When a piece of functionality is open to abuse, an attacker could potentially annoy other users or perhaps defraud the system entirely. The potential and level of abuse will vary from web site to web site and application to application.
 Abuse of Functionality techniques are often intertwined with other categories of web application attacks, such as performing an encoding attack to introduce a query string that turns a web search function into a remote web proxy. Abuse of Functionality attacks are also commonly used as a force multiplier. For example, an attacker can inject a Cross-site Scripting snippet into a web-chat session and then use the built-in broadcast function to propagate the malicious code throughout the site.
 In a broad view, all effective attacks against computer-based systems entail Abuse of Functionality issues. Specifically, this definition describes an attack that has subverted a useful web application for a malicious purpose with little or no modification to the original function.
 
Abuse of Functionality Examples
Examples of Abuse of Functionality include:
  1.  Using a web site’s search function to access restricted files outside of a web directory.
  2.  Subverting a file upload subsystem to replace critical internal configuration files.
  3.  Performing a DoS by flooding a web-login system with good usernames and bad passwords to lock out legitimate users when the allowed login retry limit is exceeded.
Other real-world examples are described in the following sections.
 
Matt Wright’s FormMail
The PERL-based web application “FormMail” was normally used to transmit user-supplied form data to a preprogrammed email address. The script offered an easy-to-use solution for web sites to gather feedback. For this reason, the FormMail script was one of the most popular CGI programs online. Unfortunately, this same high degree of utility and ease of use was abused by remote attackers to send email to any remote recipient. In short, this web application was transformed into a spam-relay engine with a single browser web request. An attacker merely had to craft a URL that supplied the desired email parameters and perform an HTTP GET to the CGI, such as the following:
bin/FormMail.pl?recipient=email@victim.example&message=you%20got%20spam
 An email would be dutifully generated, with the web server acting as the sender, allowing the attacker to be fully proxied by the web application. Because no security mechanisms existed for this version of the script, the only viable defensive measure was to rewrite the script with a hard-coded email address. Barring that, site operators were forced to remove or replace the web application entirely.
 
Macromedia’s Cold Fusion
Sometimes basic administrative tools    are embedded within web applications that can be easily used for unintended purposes. For example, Macromedia’s Cold Fusion by default has a built-in module for viewing source code that is universally accessible. Abuse of this module can result in critical web application information leakage. Often these types of modules are not sample files or extraneous functions, but critical system components. This makes disabling these functions problematic since they are tied to existing web application systems.
 
Smartwin CyberOffice Shopping Cart Price Modification
 Abuse of Functionality occurs when an attacker alters data in an unanticipated way in order to modify the behavior of the web application. For example, the   CyberOffice shopping cart can be abused by changing the hidden price field within the web form. The web page is downloaded normally, edited,     and then resubmitted with the prices set to any desired value.
 
Apache Countermeasures for Abuse of Functionality
Prevention of these kinds of attacks depends largely upon designing web applications with core principles of security. Specifically, this entails implementing the least-privilege principle: web applications should only perform their intended function, on the intended data, for their intended customers, and nothing more. Furthermore, web applications should also verify all user-supplied input to ensure that proper parameters are being passed from the client.
Many web sites are vulnerable to Abuse of Functionality threats because they rely solely on security through obscurity for protection. We strongly recommend that the functionality and purpose of each web application be clearly documented. This will allow implementers and auditors to quickly identify functions that could be subject to abuse before bringing these systems online.
 With specific regard to Apache, utilizing the CIS Apache Benchmark Scoring Tool will assist with locking down the web server and applying the principle of least privilege by restricting the capabilities of the Apache user account, disabling un-needed modules, and updating permissions on directories and files.
 
References
 “FormMail Real Name/Email Address CGI Variable Spamming Vulnerability” www.securityfocus.com/bid/3955
“CA Unicenter pdmcgi.exe View Arbitrary File” www.osvdb.org/displayvuln.php?osvdb_id=3247
“PeopleSoft PeopleBooks Search CGI Flaw” www.osvdb.org/displayvuln.php?osvdb_id=2815
“iisCART2000 Upload Vulnerability” secunia.com/advisories/8927/
“PROTEGO Security Advisory #PSA200401” www.protego.dk/advisories/200401.html
“Price modification possible in CyberOffice Shopping Cart” http://archives.neohapsis.com/archives/bugtraq/2000-10/0011.html

Denial of Service

Denial of Service (DoS) is an attack technique with the intent of preventing a web site from serving normal user activity. DoS attacks, which are normally applied to the network layer, are also possible at the application layer. These malicious attacks can succeed by starving a system of critical resources, by exploiting a vulnerability, or by abusing functionality.
 Many times, DoS attacks will attempt to consume all of a web site’s available system resources such as CPU, memory, disk space, and so on. When any one of these critical resources reaches full utilization, the web site will normally be inaccessible.
 As today’s web application environments include a web server, database server, and an authentication server, DoS at the application layer may target each of these independent components. Unlike DoS at the network layer, where a large number of connection attempts are required, DoS at the application layer is a much simpler task to perform.
 
DoS Example
For this example, the target is a healthcare web site that generates a report with medical history. For each report request, the web site queries the database to fetch all records matching a single social security number. Given that hundreds of thousands of records are stored in the database (for all users), the user will need to wait three minutes to get his medical history report. During the three minutes of time, the database server’s CPU reaches 60 percent utilization while searching for matching records.
 A common application layer DoS attack will send 10 simultaneous requests asking to generate a medical history report. These requests will most likely put the web site under a DoS condition as the database server’s CPU will reach 100 percent utilization. At this point, the system will likely be inaccessible to normal user activity.
 There are many different targets for a DoS attack:
  •  DoS targeting a specific user.  An intruder will repeatedly attempt to log in to a web site as a particular user, purposely doing so with an invalid password. This process will eventually lock out the user.
  •  DoS targeting the database server.  An intruder will use SQL injection techniques to modify the database so that the system becomes unusable (e.g., deleting all data, deleting all usernames, and so forth).
  • DoS targeting the web server.  An intruder will use Buffer Overflow techniques to send a specially crafted request that will crash the web server process, causing the system to be inaccessible to normal     user activity.
 
Apache Countermeasures for DoS Attacks
As listed previously, web-based DoS attacks may take on many forms, as the target of the attack may be focused at different components of the web server or application. In order to mitigate the effects of a DoS attack, we therefore need to implement multiple solutions.
 
DoS Targeting a Specific User
 Apache does not have a built-in capability to lock user accounts due to failed login attempts. This process is normally handled by the authentication application; in this scenario, perhaps the user is being authenticated with credentials that are stored in a database. This means that the lockout procedures would reflect the policies of the database authentication mechanism.
 The best way to approach this with Apache is to rely on the Mod_Dosevasive  settings to identify when an attacker is using automated means to authenticate to numerous accounts. In this attack scenario, we have two different triggers for identification: first are the alerts generated by Mod_Dosevasive  if the attacker sends data over our threshold, and the second are the 401 Unauthorized status code alerts for the failed logins that are generated by the use of CGI scripts. With either of these alerting mechanisms, we could identify the source IP of the attack and implement access control directives to deny further access.
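Once the offending source IP has been identified, blocking further access could be as simple as the following sketch; the IP address and the server-wide scope are illustrative and would normally be tailored to the site:
<Location "/">
    Order Allow,Deny
    Allow from all
    # Deny the identified attacking host (illustrative address)
    Deny from 192.0.2.45
</Location>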
 
DoS Targeting the Database Server
 In order to combat this type of attack, we must implement proper input validation filtering so that an attacker is not able to successfully pass SQL statements within the URL to the back-end database. Please refer to the previous section on SQL Injection for example security filters.
 
DoS Targeting the Web Server
We previously discussed tuning the configuration of the HTTP connection to help mitigate the effects of a DoS attack with updated settings for KeepAlives, KeepAliveTimeouts, and so on. In addition to these Apache directives, we also rely on Mod_Dosevasive to respond to these DoS attacks. As I mentioned in the previous chapter, I have made some updates to the Mod_Dosevasive code so that it runs more efficiently in my environment. An additional technique that I use to lessen the impact of a DoS attack is to change the default status code returned by Mod_Dosevasive. The default status code is 403 Forbidden. This causes resource consumption issues in my environment since I utilize CGI alerting scripts for the 403 status codes. These scripts will present the attacker with an html page and also email security personnel. The overhead associated with spawning these CGI scripts and calling up sendmail exacerbates the effects of a DoS attack against my site. How can we fix this issue?
I decided to update the Mod_Dosevasive code to change the status code, but the question was “What should I change it to?” Preferably, I needed a status code that would not trigger a CGI script and would only return the HTTP response headers. This lack of a response message body helps to reduce the network consumption. I therefore edited the mod_dosevasive20.c file and changed all status code entries from HTTP_FORBIDDEN to HTTP_MOVED_TEMPORARILY.
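The bulk change described above could be made with a one-line edit before recompiling the module; the following is a hedged sketch using sed, and it simply assumes the HTTP_FORBIDDEN constant appears verbatim in the source file:
# Replace every HTTP_FORBIDDEN status with HTTP_MOVED_TEMPORARILY in the module source
sed 's/HTTP_FORBIDDEN/HTTP_MOVED_TEMPORARILY/g' mod_dosevasive20.c > mod_dosevasive20.c.new
mv mod_dosevasive20.c.new mod_dosevasive20.c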
 Besides a resource consumption attack, an attacker may be able to take advantage of a vulnerability with the web server software to cause the web server to hang or crash. A good example of this situation was the Chunked-Encoding Vulnerability from June 2002 (www.cert.org/advisories/CA-2002-17.html ). With this vulnerability, an attacker could send a request that included the “Transfer-Encoding: chunked” header along with payload data that could potentially crash the server or cause code execution. eEye Security released a tool that would automatically check a web server to verify if it was vulnerable: http://eeye.com/html/Research/Tools/apachechunked.html . The resulting HTTP request looked like this:
**************Begin Session****************
POST /EEYE.html HTTP/1.1
Transfer-Encoding: chunked
Content-Length: 22
4
EEYE
7FFFFFFF
[DATA]
**************End Session******************
 Besides updating Apache with the appropriate patch, you could also implement a Mod_Security filter to block all client  requests that submit the Transfer-Encoding header:
SecFilterSelective HTTP_TRANSFER_ENCODING "!^$"
Besides specific Apache mitigation options, you should monitor your web site’s resources. Isolating different critical resources and simulating DoS scenarios using stress tools is an excellent way to test overall system integrity. When “hot spots” are detected, try to review your design or add more resilient resources. Additional network architecture solutions include server fail-over and threshold-based load sharing, balancing, or redundancy.
 
References
“CERT Advisory CA-2002-17 Apache Web Server Chunk Handling Vulnerability” www.cert.org/advisories/CA-2002-17.html
“The Attacks on GRC.com” http://grc.com/dos/grcdos.htm

Insufficient Anti-Automation

Insufficient Anti-Automation occurs     when a web site permits an attacker to automate a process that should only be performed manually. Certain web site functionalities should be protected against automated attacks.
Left unchecked, automated robots (programs) or attackers could repeatedly exercise web site functionality attempting to exploit or defraud the system. An automated robot could potentially execute thousands of requests a minute, causing loss of performance or service.
 
Insufficient Anti-Automation Example
 An automated robot should not be able to sign up 10,000 new accounts in a few minutes. Similarly, automated robots should not be able to annoy other users with repeated message board postings. These operations should be limited only to human usage.
 
Apache Countermeasures for Insufficient Anti-Automation
There are a few solutions that have been used in the past to determine if a web request is from a person or a robot, but the most telling characteristic is the speed of the requests. Therefore, the best mitigation option for Apache is to leverage Mod_Dosevasive to monitor the connection thresholds.
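A minimal sketch of the relevant Mod_Dosevasive threshold settings for Apache 2.0 follows; the numeric values are illustrative and should be tuned to the site’s normal traffic patterns:
<IfModule mod_dosevasive20.c>
    DOSHashTableSize    3097
    # More than 5 requests for the same page from one address within 1 second triggers blocking
    DOSPageCount        5
    DOSPageInterval     1
    # More than 50 requests to the site from one address within 1 second triggers blocking
    DOSSiteCount        50
    DOSSiteInterval     1
    # Block the offending address for 60 seconds
    DOSBlockingPeriod   60
</IfModule>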
 
References
 “Telling Humans Apart (Automatically)” www.captcha.net/
 “Ravaged by Robots!” By Randal L. Schwartz www.webtechniques.com/archives/2001/12/perl/
 “.Net Components Make Visual Verification Easier” By JingDong (Jordan) Zhang http://go.cadwire.net/?3870,3,1
“Vorras Antibot” www.vorras.com/products/antibot/
“Inaccessibility of Visually-Oriented Anti-Robot Tests” www.w3.org/TR/2003/WD-turingtest-20031105/
 

Insufficient Process Validation

Insufficient Process Validation occurs     when a web site permits an attacker to bypass or circumvent the intended flow control of an application. If the user state through a process is not verified and enforced, the web site could be vulnerable to exploitation or fraud.
 When a user performs a certain web site function, the application may expect the user to navigate through a specific order sequence. If the user performs certain steps incorrectly or out of order, a data integrity error occurs. Examples of multi-step processes include wire transfer, password recovery, purchase checkout, account signup, and so on. These processes will likely require certain steps to be performed as expected.
 For multi-step processes to function properly, web sites are required to maintain user state as the user traverses the process flow. Web sites will normally track a user’s state through the use of cookies or hidden HTML form fields. However, when tracking is stored on the client side within the web browser, the integrity of the data must be verified. If not, an attacker may be able to circumvent the expected traffic flow by altering the current state.
 
Insufficient Process Validation Example
An online shopping cart system may offer the user a discount if product A is purchased. The user may not want to purchase product A, but product B. By filling the shopping cart with product A and product B, and entering the checkout process, the user obtains the discount. The user then backs out of the checkout process and removes product A, or simply alters the values before submitting to the next step. The user then reenters the checkout process, keeping the discount already given in the previous checkout process with product A in the shopping cart, and obtains a fraudulent purchase price.
 
Apache Countermeasures for Insufficient Process Validation
A term commonly used in these scenarios is Forceful Browsing, which is a technique used by attackers when they attempt to access URLs in an order that is unexpected by the application. These types of logical attacks are the most difficult for Apache to address, as it does not have knowledge of the expected process flow of the application. The best way to approach this is to document the desired application flow and then implement various Mod_Security filters to verify that the client came from the correct URL when they access the current URL. For instance, say that you have a login page and then a page for resetting your account password. You could implement a Mod_Security filter like this:
SecFilterSelective SCRIPT_FILENAME "/account/passwd.php" chain
SecFilterSelective HTTP_REFERER "!/account/login.php"
 Another possible process flow validation would be to use Mod_Security  to verify portions of a session ID or cookie. If your application sets or updates the session ID in response to certain actions, you could possibly validate portions of the cookie. For instance, say that your application sets this cookie when a client is attempting to update their account information:
Set-Cookie:
Account=pCqny0PnAkGv22QSIZUIHfF5PHIvsai1W03%2BfrKhJxgyJsKalgubbMBrwkI%3D%3DG2G3%0D;
path=/account/update.php; expires=Fri, 06-May-2005 09:11:43 GMT
The cookie includes the “path=” parameter. We can implement some Mod_Security  filters to verify that the path parameter is reflecting the proper locations during certain application functions.
SecFilterSelective SCRIPT_FILENAME "/account/passwd.php" chain
SecFilterSelective COOKIE_Account "!path\=/account/update\.php"
These directives will redirect a client back to the login process if the path parameter in the Account cookie is not set appropriately.
 
References
 “Dos and Don’ts of Client Authentication on the Web” By Kevin Fu, Emil Sit, Kendra Smith, Nick Feamster—MIT Laboratory for Computer Science http://cookies.lcs.mit.edu/pubs/webauth:tr.pdf

Identifying Probes and Blocking Well-Known Offenders

 During the initial reconnaissance phase of most web attacks, the attackers will need to interact with the web server or application to gather information. They can then use this information to better plan for the actual exploit    scenario. While these requests are not the actual exploits themselves, they are still a critical piece of the puzzle for an attacker. This is why we, as web security practitioners, need to pay close attention to the initial probe requests sent to our servers, as they are often omens of the attack to come.
 

Worm Probes

 The use of worm programs to automatically    scan and compromise web servers has been growing over the last few years. First, there were worms such as Sadmind, CodeRed, and NIMDA. More recently, web application worms such as phpBB/Santy and the Awstats worms have been seen. While the attack vectors for these worms will differ, there is one pretty universal characteristic that all of these web malware specimens share. They almost all propagate to new targets based on IP addresses and not by domain names. Most worms are coded to scan certain network ranges for new targets that are listening on port 80. As an example of this characteristic, take a look at the following Mod_Security Audit_Log  entry. This entry shows the XML RPC Worm that attempted to exploit my web server.
========================================
Request: 66.38.145.65 - - [08/Nov/2005:18:58:34 --0500] "POST /xmlrpc.php HTTP/1.1" 403 743
Handler: cgi-script
----------------------------------------
POST /xmlrpc.php HTTP/1.1
Host: 192.168.1.100
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)
Content-Type: text/xml
Content-Length: 269
mod_security-message: Access denied with code 403. Pattern match "^$|!hostname.com" at
HTTP_HOST
mod_security-action: 403
269