diff --git a/source b/source
index a58b64633be..184afa375aa 100644
--- a/source
+++ b/source
@@ -6625,7 +6625,7 @@ a.setAttribute('href', 'http://example.com/'); // change the content attribute d
 <ref spec=MIMESNIFF>
-The sniffed type of a
+The <dfn data-x-href="https://mimesniff.spec.whatwg.org/#computed-mime-type">computed type of a
 resource must be found in a manner consistent with the requirements given in the MIME
 Sniffing specification for finding the sniffed media type of the relevant sequence of octets.
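The computed-type step can be illustrated with a toy sketch. This is not the spec algorithm — the real MIME Sniffing rules cover many more signatures and supplied-type special cases, and the function and table names below are made up for illustration:

```python
# Illustrative subset of MIME sniffing: a specific supplied (Content-Type)
# type wins; otherwise the initial octets are matched against signatures.
SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
    (b"\xff\xd8\xff", "image/jpeg"),
]

def computed_mime_type(supplied_type, octets):
    """Return a (very simplified) computed MIME type for a resource."""
    # A present, specific supplied type is used as-is.
    if supplied_type and supplied_type not in (
            "unknown/unknown", "application/unknown", "*/*"):
        return supplied_type
    # Otherwise sniff the leading bytes.
    for signature, mime_type in SIGNATURES:
        if octets.startswith(signature):
            return mime_type
    return "application/octet-stream"
```

The point of the shared definition is that every consumer of the "computed type" gets the same answer for the same octets and headers.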
@@ -11785,7 +11785,8 @@ gave me some of the songs they wrote. I love sharing my music.</p>
 Otherwise, if the resource is expected to be an image, user agents may apply the image
 sniffing rules, with the official type being the type determined from the resource's Content-Type
-metadata, and use the resulting sniffed type of the resource as if it was the actual type.
+metadata, and use the resulting computed type of the
+resource as if it was the actual type.
 Otherwise, if neither of these conditions apply or if the user agent opts not to apply the image
 sniffing rules, then the user agent must use the resource's Content-Type metadata to
 determine the type of the resource. If there
@@ -27841,7 +27842,7 @@ attribute, set the browsing context name of the element's nes
 specified in that type attribute.
-Otherwise, let tentative type be the sniffed type of the resource.
+Otherwise, let tentative type be the computed type of the resource.
@@ -34616,13 +34617,13 @@ interface MediaController : EventTarget {
 The tasks queued by the fetching algorithm on the
 networking task source to process the data as it is being
-fetched must determine the type of
-the resource. If the type of the resource is not a supported text
+fetched must determine the type of
+the resource. If the type of the resource is not a supported text
 track format, the load will fail, as described below. Otherwise, the resource's data must be
 passed to the appropriate parser (e.g., the WebVTT parser) as it is received, with the text
 track list of cues being used for that parser's output.
+<!-- also critical block below, and the word "computed" in the paragraph after that -->
 The appropriate parser will incrementally update the
 text track list of cues during these networking task
@@ -34646,7 +34647,7 @@ interface MediaController : EventTarget {
 task must use the DOM manipulation task source.
-If fetching does not fail, but the type of the resource is not a supported
+If fetching does not fail, but the type of the resource is not a supported
 text track format, or the file was not successfully processed (e.g., the format in question is an
 XML format and the file contained a well-formedness error that the XML specification requires be
 detected and reported to the application), then the task
@@ -37410,9 +37411,9 @@ dictionary TrackEventInit : EventInit {
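As a concrete (and simplified) example of such a supported-format check: a WebVTT resource must begin with the string "WEBVTT" (optionally preceded by a UTF-8 BOM), followed by a space, tab, line terminator, or end of file. A sketch, with a made-up function name:

```python
def is_webvtt(data: bytes) -> bool:
    """Simplified check that a byte stream looks like a WebVTT text track."""
    if data.startswith(b"\xef\xbb\xbf"):  # strip an optional UTF-8 BOM
        data = data[3:]
    if not data.startswith(b"WEBVTT"):
        return False
    rest = data[6:]
    # "WEBVTT" must be followed by end-of-file, space, tab, CR, or LF.
    return rest == b"" or rest[:1] in (b" ", b"\t", b"\r", b"\n")
```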
 If the nested browsing context's active document was created by
 the page load processing model for XML files section because
-the sniffed type of the resource in the navigate algorithm was
-image/svg+xml, then return that Document object and abort these
-steps.
+the computed type of the resource in the navigate algorithm was
+image/svg+xml, then return that
+Document object and abort these steps.
 Otherwise, return null.
-Let type be the sniffed type of
+Let type be the computed type of
 the resource. If the user agent has been configured to process resources of the given type
 using some mechanism other than rendering the content in a browsing
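The branching that follows "let type be the computed type" can be sketched as a dispatch table. This is a hypothetical structure, not spec text — the real navigate algorithm has more branches and registered-handler checks — but it captures the order of the decisions, including the detail above that image/svg+xml routes to the XML files model:

```python
def dispatch_navigation(computed_type: str, external_handlers: dict) -> str:
    """Pick a page load processing model from a computed MIME type (sketch)."""
    # A configured non-rendering mechanism (e.g. an external app) wins first.
    if computed_type in external_handlers:
        return external_handlers[computed_type]
    if computed_type == "text/html":
        return "HTML document"
    # Checked before the media branch, so image/svg+xml lands here.
    if computed_type.endswith("+xml") or computed_type in (
            "text/xml", "application/xml"):
        return "XML document"
    if computed_type == "text/plain":
        return "plain text document"
    if computed_type.startswith(("image/", "video/", "audio/")):
        return "media document"
    return "download"
```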
@@ -80848,7 +80849,7 @@ State: <OUTPUT NAME=I>1</OUTPUT> <INPUT VALUE="Increment" TYPE=BUTTON O
 The input byte stream converts bytes into characters for use in the
 tokenizer. This process relies, in part, on character encoding
 information found in the real Content-Type metadata of the
-resource; the "sniffed type" is not used for this purpose.
+resource; the "computed type" is not used for this purpose.
 When a plain text document is to be loaded in a browsing context, the user agent
 must queue a task to create a Document object, mark it as
-being an HTML document, set its content type to the sniffed MIME type of the
-resource (type in the navigate algorithm), initialise the
-Document object, create an HTML parser, associate it with the
-Document, act as if the tokenizer had emitted a start tag token with the tag name
-"pre" followed by a single U+000A LINE FEED (LF) character, and switch the HTML parser's
-tokenizer to the PLAINTEXT state. Each task that
-the networking task source places on the task queue while fetching runs
-must then fill the parser's input byte stream with the fetched bytes and cause the
-HTML parser to perform the appropriate processing of the input stream.
+being an HTML document, set its content type to the computed MIME type of the
+resource (type in the navigate algorithm), initialise the Document object,
+create an HTML parser, associate it with the Document, act as if the
+tokenizer had emitted a start tag token with the tag name "pre" followed by a single U+000A LINE
+FEED (LF) character, and switch the HTML parser's tokenizer to the PLAINTEXT
+state. Each task that the networking task
+source places on the task queue while fetching runs must then fill the
+parser's input byte stream with the fetched bytes and cause the HTML
+parser to perform the appropriate processing of the input stream.
 The rules for how to convert the bytes of the plain text document into actual characters, and
 the rules for actually rendering the text to the user, are defined by the specifications for the
-sniffed MIME type of the resource (type in the navigate algorithm).
+computed MIME type of the resource (type
+in the navigate algorithm).
 The document's character encoding must be set to the character encoding used to
 decode the document.
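The tree a user agent synthesizes for a text file can be sketched by serializing what the parser builds after the emitted "pre" start tag, the single LF, and the switch to the PLAINTEXT state. An illustrative helper (not spec text; in PLAINTEXT every remaining character is literal text, so escaping below only matters for re-serializing the resulting tree as HTML):

```python
import html

def plain_text_document(text: str) -> str:
    """Serialize the tree a UA builds for a text/plain resource (sketch)."""
    # The stream ends up as the text content of a single <pre> element,
    # preceded by the LF the parser acted as if the tokenizer had emitted.
    return ("<html><head></head><body><pre>\n"
            + html.escape(text)
            + "</pre></body></html>")
```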
@@ -81000,12 +81003,13 @@ State: <OUTPUT NAME=I>1</OUTPUT> <INPUT VALUE="Increment" TYPE=BUTTON O
 When an image, video, or audio resource is to be loaded in a browsing context, the
 user agent should create a Document object, mark it as being an HTML document, set its content
-type to the sniffed MIME type of the resource (type in the
-navigate algorithm), initialise the Document object, append
-an html element to the Document, append a head element and
-a body element to the html element, append an element host
-element for the media, as described below, to the body element,
-and set the appropriate attribute of the element host element, as described
-below, to the address of the image, video, or audio resource.
+type to the computed MIME type of the resource (type in the
+navigate algorithm), initialise the
+Document object, append an html element to the
+Document, append a head element and a body element to the
+html element, append an element host element for the media, as described below, to the
+body element, and set the appropriate attribute of the element host
+element, as described below, to the address of the image, video, or audio resource.
 The element host element to create for the media is the element given in
 the table below in the second cell of the row whose first cell describes the media. The
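The media document structure those steps produce can be sketched like this. The helper and its kind-to-element mapping are hypothetical stand-ins for the spec's table:

```python
def media_document(kind: str, url: str) -> str:
    """Serialize the document synthesized for a media resource (sketch)."""
    # The host element matches the kind of media being shown.
    host = {"image": "img", "video": "video", "audio": "audio"}[kind]
    # The host element's src attribute is set to the resource's address.
    element = (f'<{host} src="{url}">' if host == "img"  # img is a void element
               else f'<{host} src="{url}"></{host}>')
    return f"<html><head></head><body>{element}</body></html>"
```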
@@ -81051,12 +81055,13 @@ State: <OUTPUT NAME=I>1</OUTPUT> <INPUT VALUE="Increment" TYPE=BUTTON O
 browsing context, the user agent should create a Document object, mark
 it as being an HTML document and mark it as being a
 plugin document, set its content
-type to the sniffed MIME type of the resource (type in the
-navigate algorithm), initialise the Document object, append
-an html element to the Document, append a head element and
-a body element to the html element, append an embed to the
-body element, and set the src attribute of the
-embed element to the address of the resource.
+type to the computed MIME type of the resource (type in the
+navigate algorithm), initialise the
+Document object, append an html element to the
+Document, append a head element and a body element to the
+html element, append an embed to the body element, and
+set the src attribute of the embed element to
+the address of the resource.
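Analogously, a sketch of the plugin document's synthesized markup (illustrative helper only):

```python
def plugin_document(url: str) -> str:
    """Serialize the document synthesized for a plugin-handled resource."""
    # A single <embed> in the body hosts the resource; its src attribute
    # is set to the resource's address.
    return f'<html><head></head><body><embed src="{url}"></body></html>'
```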
 The term plugin document is used by
 Content Security Policy as part of the mechanism that ensures iframes