Eduardo' Vela" Nava (sirdarckcat) edited this page Mar 20, 2019 · 4 revisions


Defenses against this attack fall into two main categories:

  1. Protecting documents
  2. Protecting subresources

The difference between these is that documents are meant to be rendered by the browser as HTML (eg, in a tab, popup, or iframe), and subresources are meant to be loaded as a subresource inside another document (eg, in a script, image, or an XHR request). There are some leaks that apply only to subresources, some that apply only to documents and a few that apply to both.

Defenses for both of these come in two categories as well:

  1. Differentiate attacks from normal user behavior
  2. Limit information leaks to the minimum possible

Both are imperfect, as the nature of the leaks is inherent to the way the web works, but together they can make attacks significantly harder to pull off.

Protecting documents

The main challenge with documents is that they are either:

  • Displayed in the URL bar for the user (and the user might want to bookmark them)
  • Displayed as an iframe within some other document (and hence, require some form of embedding)
  • Displayed as a pop-up within a third-party site (and require some form of cross-window communication)

As such, the defense for a given page will vary depending on how it is used, but the ideal situation would be that all documents that render user data use SameSite=strict cookies and X-Frame-Options: DENY. There will be exceptions, and those exceptions should be mitigated so as to limit the leaks as much as possible.

X-Frame-Options: DENY

Unless a website needs to be displayed inside an iframe, using X-Frame-Options: DENY will severely limit the amount of information the website leaks. Most sites are probably already using X-Frame-Options to defend against clickjacking, but it also helps defend against XSLeaks.
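As a concrete illustration, the header can be attached to every response in one place. Below is a minimal sketch as a WSGI middleware; the function name is illustrative, and in practice a framework's own header configuration would usually be used instead:

```python
def deny_framing(app):
    """Wrap a WSGI app so every response carries X-Frame-Options: DENY."""
    def middleware(environ, start_response):
        def wrapped_start(status, headers, exc_info=None):
            # Drop any pre-existing X-Frame-Options header, then deny framing.
            headers = [(k, v) for k, v in headers
                       if k.lower() != 'x-frame-options']
            headers.append(('X-Frame-Options', 'DENY'))
            return start_response(status, headers, exc_info)
        return app(environ, wrapped_start)
    return middleware
```

Doing this globally (rather than per-page) avoids accidentally shipping a new framable endpoint; pages that genuinely need embedding can then be allow-listed explicitly.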

If a website has to be embedded in other sites (ads, widgets, and so on), then the information contained inside it should be as constant and predictable as possible. For example, if the iframe displays the user's profile picture, an attacker might be able to discover the user's identity by checking whether the browser cached that image; so any subresources that might leak the identity of the user shouldn't be cached.
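One way to act on that last point is to mark such personalized subresources as uncacheable. A minimal sketch in Python (the exact header values are an illustrative, conservative choice, not a prescribed policy):

```python
# Headers that keep a personalized subresource (e.g. a user's profile
# picture) out of browser and shared caches, so that probing for a cache
# hit cannot reveal whether the user has seen the resource before.
UNCACHEABLE_HEADERS = {
    'Cache-Control': 'no-store, must-revalidate',
    'Pragma': 'no-cache',   # for legacy HTTP/1.0 caches
    'Expires': '0',
}

def apply_no_cache(headers: dict) -> dict:
    """Return a copy of `headers` with caching disabled."""
    out = dict(headers)
    out.update(UNCACHEABLE_HEADERS)
    return out
```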

SameSite=strict cookies

Strict same-site cookies are useful to differentiate requests generated by the user manually typing a URL, clicking a bookmark, or clicking a link on the current site from requests initiated by other websites. In other words, if an attacker is trying to perform an attack against a website, the attacking site usually has to initiate the navigation to that page somehow, and because of the way SameSite=strict cookies work, they won't be included in requests initiated by the attacker.

To be able to deploy this defense, web developers have to do two things:

  1. Create a new cookie that is unguessable and tied to the user session, with the SameSite=strict flag.
  2. Enumerate all pages in their domain that don't need to be linked to from third-party websites.

The reason not to reuse existing session cookies is that many applications expect to be linked to from other sites (note that, per the spec, "same-site" covers any subdomain of the site's registrable domain, so a subdomain of the site counts as same-site while a different registrable domain does not), and if the existing cookies were marked SameSite=strict, users would get a broken experience where they are forced to log in again just because they were on a different site beforehand.
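Creating the dedicated cookie (step 1 above) might look like the following sketch using Python's standard library; the cookie name `__Host-ss` and its attributes are illustrative assumptions:

```python
import secrets
from http.cookies import SimpleCookie

def make_strict_cookie(name: str = '__Host-ss') -> str:
    """Build a Set-Cookie header value for a dedicated, unguessable
    SameSite=strict cookie tied to the user's session."""
    cookie = SimpleCookie()
    token = secrets.token_urlsafe(32)      # unguessable per-session value
    cookie[name] = token                   # store server-side against the session
    cookie[name]['samesite'] = 'Strict'    # omitted on cross-site requests
    cookie[name]['httponly'] = True        # keep it away from script access
    cookie[name]['secure'] = True          # required by the __Host- prefix
    cookie[name]['path'] = '/'
    return cookie[name].OutputString()
```

The `__Host-` prefix (which requires Secure, Path=/, and no Domain attribute) additionally stops subdomains from overwriting the cookie, which pairs well with the "unguessable and tied to the user session" requirement.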

An easy way to identify pages that are linked to by third-party sites is to check the logs for HTTP Referer headers; any page that only sees empty or same-site referrers should be safe to gate behind the SameSite=strict cookie.
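A rough sketch of that log check, assuming `example.com` as the site's registrable domain (a real implementation would use a public-suffix list rather than naive suffix matching):

```python
from urllib.parse import urlsplit

SITE = 'example.com'   # illustrative: your site's registrable domain

def is_cross_site(referrer: str, site: str = SITE) -> bool:
    """Classify a logged Referer value. Pages whose logs contain only
    empty or same-site referrers are candidates for the strict cookie."""
    if not referrer:
        return False   # empty referrer: direct navigation or bookmark
    host = urlsplit(referrer).hostname or ''
    return not (host == site or host.endswith('.' + site))
```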

Note that what the application does when the SameSite=strict cookie is not present is important, especially because there will be cases where the cookie is absent even though no attack is taking place, for example:

  • The cookie will not be present if the user bookmarked the page in a third-party website (instead of using the browser's bookmarks)
  • The cookie will not be present if the page is linked to from a news article, blog post, Reddit, Hacker News, Slashdot, etc.
  • The cookie will not be present if it is navigated from a part of the application or site that might be hosted in another domain (eg, integrations with OAuth, PayPal, etc).

Note: There are a few gotchas that you need to keep in mind:

  1. It is important to prevent browsers from caching the document with the SameSite=strict cookie set. You can do this without negatively impacting performance by using Vary: Cookie.
  2. This has to be checked server-side, since document.cookie always includes SameSite=strict cookies, even if the request was cross-site (they are just not included in the HTTP request, but the DOM API will include them).
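Both gotchas can be handled at the same point in the request path. A minimal server-side sketch (the cookie name and return shape are illustrative):

```python
from http.cookies import SimpleCookie

def check_strict_cookie(cookie_header: str, expected: str,
                        name: str = '__Host-ss'):
    """Server-side check for the SameSite=strict cookie.

    `cookie_header` is the raw Cookie request header; `expected` is the
    token stored for this session. Returns (response_headers, same_site).
    """
    cookies = SimpleCookie(cookie_header or '')
    morsel = cookies.get(name)
    same_site = morsel is not None and morsel.value == expected
    # Always vary on Cookie so a cache never serves a response produced
    # with the cookie present to a request that arrived without it.
    headers = {'Vary': 'Cookie'}
    return headers, same_site
```

Note the check inspects the HTTP Cookie header server-side, never `document.cookie`, for exactly the reason given in gotcha 2.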


Hopefully, by first checking HTTP Referers, the exceptions will be rare; but since some of these exceptions are out of the developer's control, care must be taken to choose the best user experience without leaving the page vulnerable to attack. Here are some examples of things that could be done to mitigate the attack:

  1. Ask the user to "confirm" their action - for example, by showing a button that requests the user to repeat or to confirm the action they are about to take.
  2. Ask the user to click on a button to reload the page (note that this shouldn't be done without user interaction!)

By asking the user for confirmation, the attack is slowed down by the required user interaction. As such, SameSite=strict cookies are one of the best tools available to differentiate user behavior from attacks, and requiring user interaction significantly limits both the speed of the attack and the amount of information leaked.

Protecting subresources

There are two types of subresources: authenticated and unauthenticated.

Authenticated subresources relevant to XSLeaks are usually XHR requests to API endpoints, and they can generally be protected by requiring SameSite=strict cookies or by protecting them with CSRF tokens.
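A stateless CSRF-token scheme for such endpoints could be sketched as follows; the HMAC-over-session-id design is one common approach, not the only one, and all names here are illustrative:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)   # illustrative: a persistent server-side key

def csrf_token(session_id: str) -> str:
    """Derive a CSRF token cryptographically bound to the session."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf(session_id: str, token: str) -> bool:
    """Reject cross-site XHRs: constant-time comparison of the token
    the client echoed back against the one derived for this session."""
    return hmac.compare_digest(csrf_token(session_id), token)
```

The token is handed to same-site pages (e.g. embedded in the HTML) and required on every state-reading API call; an attacking site cannot read it cross-origin, so its forged requests fail validation.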

Unauthenticated subresources are usually only relevant to XSLeaks when an attacker probes whether they got cached while the victim visited an authenticated page, or whether an authenticated subresource redirected to them; in those cases, it is usually more effective to protect the authenticated resource.