Sunday, December 27, 2009

Google Chrome/ WebKit - MSWord Scripting Object XSS Payload Execution Bug and Random CLSID Stringency

Google Chrome (including its customized WebKit build) has shown nonstandard behavior in implementing an embedded object with a CLSID parameter. The design bug lies in the execution of the object element directly in the context of the browser. The bug surfaces when the CLSID of a certain object is passed and a specific URL is allowed to execute as a parameter value inside it. Before jumping into all aspects of this unexpected and chaotic behavior, let's have a brief look at the W3C specification:

<!ELEMENT OBJECT - - (PARAM | %flow;)*
 -- generic embedded object -->
<!ATTLIST OBJECT
  %attrs;                              -- %coreattrs, %i18n, %events --
  declare     (declare)      #IMPLIED  -- declare but don't instantiate flag --
  classid     %URI;          #IMPLIED  -- identifies an implementation --
  codebase    %URI;          #IMPLIED  -- base URI for classid, data, archive --
  data        %URI;          #IMPLIED  -- reference to object's data --
  type        %ContentType;  #IMPLIED  -- content type for data --
  codetype    %ContentType;  #IMPLIED  -- content type for code --
  archive     CDATA          #IMPLIED  -- space-separated list of URIs --
  standby     %Text;         #IMPLIED  -- message to show while loading --
  height      %Length;       #IMPLIED  -- override height --
  width       %Length;       #IMPLIED  -- override width --
  usemap      %URI;          #IMPLIED  -- use client-side image map --
  name        CDATA          #IMPLIED  -- submit as part of form --
  tabindex    NUMBER         #IMPLIED  -- position in tabbing order --
  >

classid = uri [CT]
This attribute may be used to specify the location of an object's implementation via a URI. It may be used together with, or as an alternative to the data attribute, depending on the type of object involved.

data = uri [CT]
This attribute may be used to specify the location of the object's data, for instance image data for objects defining images, or more generally, a serialized form of an object which can be used to recreate it. If given as a relative URI, it should be interpreted relative to the codebase attribute.

So, as per the recommendations, codebase matters a lot. The value should work according to the included object, which is identified by the CLSID. The same holds for the implementation of the CLSID parameter through an embedded object.

The code that executes successfully is mentioned below:

<object classid="clsid:ae24fdae-03c6-11d1-8b76-0080c744f389">
<param name="url" value="javascript:alert(document.domain)">
</object>

Certain facts are mentioned below

1. The CLSID presented here is that of the MSWord scripting object. The good part is that this code does not get executed in Internet Explorer 8, and there is no XSS payload execution there.

2. All the other browsers, such as Mozilla Firefox, Opera, and Safari, do not execute this payload either. Safari, which also implements WebKit at prime scale, does not show any contradictory behavior in this regard.

3. Regarding the HTML5 specification, it is completely wrong to argue that because Google Chrome implements HTML5, this kind of behavior is acceptable. The latest version of Safari 4 also implements the HTML5 specification to a great extent, yet this execution behavior is not supported there.

The contradiction arises as:

1. Google Chrome is itself based on WebKit, and to the best of my knowledge, ActiveX is not supported by WebKit or on Linux platforms. CLSIDs are purely Windows object class identifiers.

"ActiveX is only supported by Internet Explorer (and browsers built on top of Internet Explorer) on Windows. Google Chrome, Mozilla Firefox, Apple Safari, and others do not support ActiveX. Instead, these browsers make use of the Netscape Plugin Application Programming Interface (NPAPI)."


But the general functionality of DOM object execution follows a top-to-bottom approach, i.e. tree order: the element at the top executes first, and so on.

2. Google Chrome executes the payload in the same manner (which can be exploited extensively for XSS) with or without the CLSID parameter. This is contradictory in its own sense. One cannot claim for any browser that XSS payload execution with or without the CLSID is the same; that is not appropriate functional behavior. As the codebase point in the W3C specification makes clear, the URI points to the object's location. Of course!

Note: if the browser base does not support a specific tag attribute, the inline code present in it should not be executed. One cannot argue that because the browser does not recognize the CLSID, it may pass control to the inline object parameter and execute the URI; that is completely against the specification, as the URI is itself defined for that object.

On the second point, code execution without the CLSID is generic; it is in no way similar to payload execution with the CLSID.

The overall picture of this kind of issue with respect to other browsers is presented below.

This represents the overall scenario. The payload can be used to conduct XSS attacks stringently. The best probable solution is not to allow the code to execute when a CLSID is present, as presented in this talk.
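One way to enforce such a restriction, also suggested later in the vendor thread, is to treat a <param name="url"> value as fetchable only when its scheme is a real network scheme, so script URLs never reach the loader. A minimal sketch in Python (the function and allowlist names are my own, not Chrome's actual code):

```python
from urllib.parse import urlparse

# Schemes that are safe to hand to a subresource/prefetch loader.
# Anything else (javascript:, data:, vbscript:, ...) is rejected.
SAFE_SCHEMES = {"http", "https", "ftp"}

def is_fetchable_url(value):
    """Return True only if the param value points to a network-fetchable
    resource rather than to script that the browser would execute."""
    scheme = urlparse(value.strip()).scheme.lower()
    return scheme in SAFE_SCHEMES

print(is_fetchable_url("http://example.com/movie.swf"))       # True
print(is_fetchable_url("javascript:alert(document.domain)"))  # False
```

A check like this rejects the payload shown above while leaving legitimate plugin resource URLs untouched.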

In a brief exchange with the Google Chrome team about this off-spec behavior, there were certain responses which are unacceptable in any case. Have a look:

"There is a special case for the "data", "movie" and "src" attributes: in "isURLAttribute" and "addSubresourceAttributeURLs".

I expect this has to do with our DNS prefetching; we attempt to start downloading
stuff as soon as we know about it. It may be that Chrome special cases this type of
PARAM, expecting it to be a URL. When it finds out there is nothing to grab off the
internet, it is handled like any other URL and the javascript is executed. The code
may need a bit of tweaking to prevent it from executing javascript; it should only
start download the resource if it contains a valid URL."

"The DNS preresolution would, at the most, do a resolution of a domain, but would never trigger any content fetch or JS execution.

There is also some scanning of content, and pre-fetching expected content. I'd be VERY surprised to hear that it leads to execution prior to such necessity."

"I am actually really curious as to why Chrome is behaving this way, even for unknown clsids. I am guessing it is some sort of a heuristic prefetching mechanism that triggers on parameters named "url"?

If my guess is correct, it would be good to have a peek at this mechanism, and limit
it to http / https, just so that it does not introduce problems elsewhere. That said, I do not see any obvious way how the current behavior would have a negative impact on common web sites - i.e., why we should treat it as a security problem."

"I agree with the previous assessment that this is not a particular security issue. I also agree that it would be good to understand the behaviour. Hence: it looks to be WebKit simply passing plugin payload URLs to the frame loader, verbatim. This simply means that in Chrome, the following two URL constructs behave similarly:

1) <object><param name="url" value="javascript:alert(document.domain)"></object>

2) <iframe src="javascript:alert(document.domain)"></iframe>

And obviously, it is any given website's responsibility to NOT pass arbitrary attacker-supplied URLs in either of those attributes."

This statement "it is any given website's responsibility to NOT pass arbitrary attacker-supplied URLs in either of those attributes." is completely obscure with
respect to this bug.

Security concern: the differential set of payloads always favors XSS execution and exposes the browser's inability to follow standard benchmarks.

The result is that nothing has come of it, and no fix is on the way. The stress is more on not treating it as a security bug than on finding the real obscurity in it, but one can still have fun with this part.

It is seriously out of the way.


Thursday, December 24, 2009

Google Sites Privacy Chaos - Is it unethical or Is this the way it has to be? A Talk!

Google Sites provides a service for users to host their websites on Google's domain. I was going through the privacy column of this website, and a stringent issue regarding the content policy came in front of me. The policy is presented below.

There is an excerpt in the privacy policy of Google Sites:

You may permanently delete any content you create in Google Sites. Because of the way we maintain this service, residual copies of your files and other information associated with your account may remain on our servers for three weeks.

This is completely untrue. The policy point is quite okay, but considering the real-world functionality, this policy is not applied. The time period for residual copies is set at three weeks, certainly not more than a month. I personally tested this six months back. I have noticed that even after six months, the deleted file (a PDF file which I do not want anybody to look into) is still recoverable from the Google site, a quite unacceptable fact. According to the policy, the deleted content should not reside on Google's servers for more than three weeks.
Let's see:

So there is an ambiguity in the applied policy of Google Sites. Is the policy being implemented in the right way?

Of course, Google owns the web!

Google Translate - Google User Content - File Uploading Cross-XSS and Design Stringency - A Talk

The Google Translate service provides an efficient way of translating content. The web is a playground for attackers, and every day a new bug or flaw is detected in the web services provided by the major giants. An interesting exercise is to dissect the web-based design of websites handling user-generated content. In discussion with Google about this problem, the issue was treated as by-design behavior.

The problem (or web bug) persists in the file-uploading feature of the Google Translate website. Malicious content such as an XSS payload, an iframe, etc. gets executed and rendered in the context of the running website. In discussion, Google stated that:

"JavaScript is executed on the … domain, rather than … This is by design, as files uploaded to the translate service are regarded as untrusted content."

There are two features provided by Google translate service which are mentioned below
1. Translation through file uploading.
2. Direct translation of content online.

Question: Why do users consider translation services secure? What if somebody is doing a monetary transaction or something similarly sensitive through them?

The question is hard to answer in itself. But one thing is sure: translation services should not be used for any critical work.

Let's have a look at the attack point:

Step 1: Uploading a malicious content file through Google Translate service

Step 2: Executing Content

Another layout

Looking at the different domains





Both domains serve the same Google search functionality. The specific user-content server can be used for different purposes, because the content on it is not trusted.

Looking at it from a different perspective, it would be great if a small message were displayed on the Google Translate service bar, as mentioned below:

"Google does not assure the integrity of the source of the content"

After considering this as a notification, I checked Bing translation, which has already applied such a notification message. Great.

Maybe it is not a solution, but a small step like visualizing your concern about content is a better design practice.

Note: a previously reported phishing vulnerability in Google Translate was patched, and a check was introduced by Google on the source and destination translation languages.

Saturday, December 19, 2009

Yahoo Babelfish - Possible Frame Injection Attack - Design Stringency

Yahoo Babel Fish is an online service for translating content into different languages. A stringent design bug leads to the possibility of conducting frame injection attacks in the context of the Yahoo domain, thereby resulting in third-party attacks. The issue has been demonstrated at some of my recent conferences. The flaw can be summed up as:

1. There is no referrer check on the origin, i.e. the source of the request.
2. Direct links can be used to send requests.
3. Iframes can be loaded directly into the context of the domain.

Points to ponder:
1. The Yahoo login page performs certain checks, authorized ones.
2. Yahoo implements frame busting in the main login page.

It is possible to remove that small piece of code and design a similar page with the same elements that can be used further. It is possible to impersonate the trust of the primary domain (Yahoo in this case) to make attacks look legitimate. There is a possibility of different attacks on Yahoo users.

Note: no specific notification is displayed at the top of a translated page.

An attacker can conduct a frame attack by following the steps below:

1. Remove the above-stated code from the main login page.
2. Design the fake domain and load it in the context of the Yahoo domain.
3. An inline iframe provides a familiar fake login page.
4. Set a backdoor in the login input boxes for stealing credentials.
5. Trap victims by spreading the manipulated URLs on the web. One can use dedicated spamming.
6. The attack is all set to work.
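The missing referrer check noted earlier is the first enabler of this attack. A minimal server-side sketch of such a check (the host allowlist is hypothetical, and a Referer check alone is a weak control, since the header can be stripped or forged, so it should complement frame busting rather than replace it):

```python
from urllib.parse import urlparse

# Hosts from which requests are considered legitimate (hypothetical list).
ALLOWED_HOSTS = {"login.yahoo.com", "www.yahoo.com"}

def referrer_ok(referer_header):
    """Reject requests with no Referer, or with a Referer whose host is
    not on the allowlist (i.e. a request sourced from a foreign page)."""
    if not referer_header:
        return False
    host = urlparse(referer_header).hostname or ""
    return host.lower() in ALLOWED_HOSTS

print(referrer_ok("http://login.yahoo.com/config/login"))  # True
print(referrer_ok("http://evil.example.org/fake-login"))   # False
```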

Step 1: Injecting IFRAME - Modified

Step 2 – Stealing Credentials


This attack works successfully. This is a demo setup; you can enter some credentials and try to log in. :)

Thursday, August 06, 2009

FTP Anonymous Services - User Enumeration and Reconnaissance

Security is termed a closed asset for any organization. It has been noticed in recent times that many business vendors allow certain anonymous access to the services running on their servers. The concern of this post is not restricted to one part but looks at the diversified impact. Apparently the issue seems small, but the resultant impact is high. Anything with default or anonymous access is potentially critical. For example, the most common issue is open FTP access. Many organizations allow anonymous access without understanding the consequences, which may hamper normal functioning.

There are certain facts:

1. A vendor has to restrict open services.
2. A vendor has to provide standard access to clients, even for a simple download. Nowadays, open access to services is not considered an appropriate solution. Even from a business perspective, restricted access should be taken into consideration. Why open FTP? Why not credential-based access?
3. If the service has to be provided, scrutinize the deployment strategy: should it be exposed on the internet or only on the intranet?
4. Why not put these services on a VPN, considering the business need?
5. Check the configuration of these deployed services. Why not use an organization-specific, policy-based password for FTP access? Why anonymous?
6. Open services are tactically exploited for information gathering and reconnaissance.
7. They can be used to scan third-party targets too.

Question: Is Security a Prime Target or Business?
Answer: Individualistic and Organizational Decision. Diversified impacts.

Let's consider a case and the risk emanating from it. For example, an organization is providing open access to FTP services. We will consider specific functions from a security point of view:

1. Passive mode
2. glob()

"Most FTP daemon implementations provide server-side globbing functionality that performs pattern expansion on these pathnames. The actual glob() implementation is often located in the FTP daemon itself, though some FTP servers use an underlying libc implementation."

"glob - Toggle file name globbing. When file name globbing is enabled, ftp expands csh(1) metacharacters in file and directory names. These characters are *, ?, [, ], ~, {, and }. The server host expands remote file and directory names. Globbing metacharacters are always expanded for the ls and dir commands. If globbing is enabled, metacharacters are also expanded for the multiple-file commands mdelete, mdir, mget, mls, and mput."
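The metacharacter expansion described in the excerpt above can be reproduced locally with Python's fnmatch module. This is a sketch of the matching semantics only, not of any FTP daemon's actual glob() implementation:

```python
import fnmatch

# A sample remote directory listing
files = ["readme.txt", "notes.txt", "core", "data1.bin", "data2.bin"]

# '*' and '?' and '[...]' behave like the csh metacharacters an FTP
# daemon expands for commands such as mget and mdelete.
print(fnmatch.filter(files, "*.txt"))      # ['readme.txt', 'notes.txt']
print(fnmatch.filter(files, "data?.bin"))  # ['data1.bin', 'data2.bin']
print(fnmatch.filter(files, "[cr]*"))      # ['readme.txt', 'core']
```

Because the server expands these patterns itself, a hostile pattern is attacker-controlled input reaching the daemon's glob() code, which is why the buffer overflow history below matters.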

An FTP server that provides anonymous access with passive mode enabled is more vulnerable to FTP bounce attacks.

The glob() function can be tested against a number of buffer overflow issues. The ability of a remote or local user to deliver input patterns to glob() implementations creates a risk of exploitation once a vulnerability is found.

Let's have a look at a real-world scenario: an analysis of one software company, purely as a thought exercise and for knowledge purposes.

Administrator@TopGun ~
$ ftp
Connected to
220 xxxx. FTP services
Name ( anonymous
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> passive
Passive mode on.
ftp> debug
Debugging on (debug=1).
ftp> glob
Globbing off.
ftp> glob on
Globbing on.

ftp> dir
---> PASV
227 Entering Passive Mode (216,220,63,213,73,192)
---> LIST
150 Here comes the directory listing.
-rw-rw-r-- 1 501 501 148181 Feb 07 2008 BMO and xxxx.pdf
drwxrwxr-x 2 501 501 4096 Jun 23 19:08 CVS
lrwxrwxrwx 1 501 501 33 Dec 02 2008 ReleaseNotes_xxxx5.pdf -> ../pdfs/ReleaseNotes_up.time5.p
lrwxrwxrwx 1 501 501 37 Dec 02 2008 ReleaseNotes_xxxx5_SP1.pdf -> ../pdfs/ReleaseNotes_up.tim

So it's easy to look at the rights configured for the different user groups.

Administrator@TopGun /cygdrive/c/scripts
$ perl
:: connected to
>> 220 xxxx. FTP services
:: logging into server as anonymous.
>> 331 Please specify the password.
>> 230 Login successful.
>> 227 Entering Passive Mode (216,220,63,213,89,62)
:: server ready for passive attack
:: sampling passive port selection
:: passive connection rate = 6259.7/sec
:: passive command latency = 0.4 seconds
:: starting the reaper engine

:: starting port 17200

Based on one of my own scripts, let's analyze the reaped information:
Administrator@TopGun /cygdrive/c/my_tools
$ perl - ftp based system user reconnaisance
written by- 0kn0ck [at]

(*) resolving the generic address for domain:

(*) detecting nameservers for the domain :

(*) trying anonymous access on -
(*) anonymous access allowed -
(*) does not support TLS

(*) trying to enumerate the configured system accounts on -

[conn str - 0] - [temp] is not a standard system configured user
[conn str - 1] - [root] is a standard system configured user
[conn str - 2] - [bin] is a standard system configured user
[conn str - 3] - [daemon] is a standard system configured user
[conn str - 4] - [adm] is a standard system configured user
[conn str - 5] - [lp] is a standard system configured user
[conn str - 6] - [sync] is a standard system configured user
[conn str - 7] - [shutdown] is a standard system configured user
[conn str - 8] - [halt] is a standard system configured user
[conn str - 9] - [mail] is a standard system configured user
[conn str - 10] - [news] is a standard system configured user
[conn str - 11] - [uucp] is a standard system configured user
[conn str - 12] - [operator] is a standard system configured user
[conn str - 13] - [games] is a standard system configured user
[conn str - 14] - [gopher] is not a standard system configured user
[conn str - 16] - [apache] is not a standard system configured user
[conn str - 17] - [named] is not a standard system configured user
[conn str - 18] - [amanda] is not a standard system configured user
[conn str - 19] - [indent] is not a standard system configured user
[conn str - 20] - [rpc] is not a standard system configured user
[conn str - 21] - [wnn] is not a standard system configured user
[conn str - 22] - [xfs] is not a standard system configured user
[conn str - 23] - [pvm] is not a standard system configured user
[conn str - 24] - [ldap] is not a standard system configured user
[conn str - 25] - [mysql] is not a standard system configured user
[conn str - 26] - [rpcuser] is not a standard system configured user
[conn str - 27] - [nsf] is not a standard system configured user
[conn str - 28] - [nobody] is a standard system configured user
[conn str - 29] - [junkbust] is not a standard system configured user
[conn str - 30] - [gdm] is not a standard system configured user
[conn str - 31] - [squid] is not a standard system configured user
[conn str - 32] - [nscd] is not a standard system configured user
[conn str - 33] - [rpm] is not a standard system configured user
[conn str - 34] - [mailman] is not a standard system configured user
[conn str - 35] - [radvd] is not a standard system configured user
(*) command completed successfully
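The enumeration output above comes from my own script, which is not reproduced here. As an assumption about how such checks typically work (for example, probing `CWD ~user` and classifying the server's reply code), the decision logic can be sketched like this; the reply strings are hypothetical examples, not captures from the target:

```python
def user_is_configured(reply):
    """Classify an FTP reply to a per-user probe such as 'CWD ~user'.

    A 2xx/3xx reply (path exists / accepted) suggests the account is
    configured on the system; a 5xx reply such as 550 suggests it is not.
    """
    code = reply.split(None, 1)[0]
    return code.isdigit() and code[0] in ("2", "3")

print(user_is_configured("250 Directory successfully changed."))  # True
print(user_is_configured("550 Failed to change directory."))      # False
```

Iterating this check over a wordlist of standard Unix account names yields exactly the kind of "is / is not a standard system configured user" table shown above.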

The only point in presenting these facts with an example is to show the risks posed
and the impact on security.

At last: why not a mature business with hardened security?

Monday, May 11, 2009

Gmail/Google Doc PDF Repurposing Integrated Attacks - Cookie Hijacking / Stealing

The Google Docs network was vulnerable to PDF repurposing attacks. The vulnerability was disclosed to Google responsibly in order to mitigate the risk. Google worked on it and patched it within a period of five days.

Google Docs has now been refined, and the integrated support for the Adobe plugin has been removed. User security was the prime issue, because millions of users were at risk if this attack persisted in the open environment. Integrated accounts were more susceptible, as certain stolen credentials could be used to access accounts.

The advisory is released here:

Enjoy !

Saturday, May 02, 2009

Troopers 09 Security Conference

The Troopers security conference is one of the finest conferences I have been to. It is very nice to have such a conference in the heart of Germany: great technical content and a nice crew to discuss things and hang around with :). I gave a talk on "Browser Design Flaws". There were some good talks around rootkits, malware for business purposes, and web application firewall stuff. All the talks were good, and it was a great learning environment. Visit: Troopers09

Personally, I liked the Packet Wars hacking competition by Bryan. It was nicely organized. You can look at it here: Packet Wars. Good hacking games to enjoy.

If you missed the fun, you can have a look at the snaps here: Troopers09 fun


Thursday, March 12, 2009

Evading Web XSS Filters through Word (Microsoft Office and Open Office) in Enterprise Web Applications

This paper sheds light on hyperlinking issues observed during penetration testing of web-based enterprise applications. This concept can be used to bypass standard XSS filters by creating a malicious Microsoft Word document.

Download the Paper at : HERE

Enjoy !