If you've been writing JS for any length of time, odds are the syntax is pretty familiar to you. There are certainly many quirks, but overall it's a fairly reasonable and straightforward syntax that draws many similarities from other languages.
However, ES6 adds quite a few new syntactic forms which are going to take some getting used to. In this chapter we'll tour through them to find out what's in store.
Warning: At the time of this writing, some of the features in this book have been implemented in various browsers (Firefox, Chrome, etc.), but many others have not, or the features are only partially implemented. Your experience may be mixed trying these examples directly. If so, try them out with transpilers, as most of these features are covered by those tools. ES6Fiddle (http://www.es6fiddle.net/) is a great, easy-to-use playground for trying out ES6, as is the online REPL for the Babel transpiler (http://babeljs.io/repl/).
You're probably aware that the fundamental unit of variable scoping in JavaScript has always been the function. If you needed to create a block of scope, the most prevalent way to do so was the IIFE (immediately invoked function expression), such as:
var a = 2;
(function IIFE(){
var a = 3;
console.log( a ); // 3
})();
console.log( a ); // 2
However, we can now create declarations which are bound to any block, called (unsurprisingly) block scoping. This means all we need is a pair of { .. } to create a scope. Instead of using var, which always declares variables attached to the enclosing function (or global, if top level) scope, use let:
var a = 2;
{
let a = 3;
console.log( a ); // 3
}
console.log( a ); // 2
It's not very common or idiomatic thus far in JS to use a standalone { .. } block as shown there, but it's always been totally valid. And developers from other languages that have block scoping will readily recognize that pattern.
I'm going to suggest that I think this is the far better way to create block-scoped variables, with a dedicated { .. } block. Moreover, I will also strongly suggest you should always put the let declaration(s) at the very top of that block. If you have more than one to declare, I'd recommend using just one let.
Stylistically, I even prefer to put the let on the same line as the opening {, to make it clearer that this block is only for the purpose of declaring the scope for those variables.
{ let a = 2, b, c;
// ..
}
Now, that's going to look strange, and it's not likely going to match the recommendations given in most other ES6 literature. But I have reasons for my madness.
There's another proposed form of the let declaration called the let-block, which looks like:
let (a = 2, b, c) {
// ..
}
That form is what I'd call explicit block scoping, whereas the let .. declaration form that mirrors var is more implicit, since it kind of hijacks whatever { .. } pair it's found in. Generally, developers find explicit mechanisms a bit more preferable than implicit mechanisms, and I claim this is one of those cases.
If you compare the previous two snippet forms, they're very similar, and in my opinion both qualify stylistically as explicit block scoping. Unfortunately, the let (..) { .. } form, the most explicit of the options, was not adopted in ES6. That may be revisited post-ES6, but for now the former option is our best bet, I think.
To reinforce the implicit nature of let .. declarations, consider these usages:
let a = 2;
if (a > 1) {
let b = a * 3;
console.log( b ); // 6
for (let i = a; i <= b; i++) {
let j = i + 10;
console.log( j );
}
// 12 13 14 15 16
let c = a + b;
console.log( c ); // 8
}
Quick quiz without looking back at that snippet: which variable(s) exist only inside the if statement, and which variable(s) exist only inside the for loop?
The answers: the if statement contains b and c block-scoped variables, and the for loop contains i and j block-scoped variables.
Did you have to think about it for a moment? Does it surprise you that i isn't added to the enclosing if statement scope? That mental pause and questioning -- I call it a "mental tax" -- comes from the fact that this let mechanism is not only new to us, but it's also implicit.
There's also hazard in the let c = .. declaration appearing so far down in the scope. Unlike traditional var-declared variables, which are attached to the entire enclosing function scope regardless of where they appear, let declarations attach to the block scope but are not initialized until they appear in the block.
Accessing a let-declared variable earlier than its let .. declaration/initialization causes an error, whereas with var declarations the ordering doesn't matter (except stylistically).
Consider:
{
console.log( a ); // undefined
console.log( b ); // ReferenceError!
var a;
let b;
}
Warning: This ReferenceError from accessing too-early let-declared references is technically called a TDZ (temporal dead zone) error -- you're accessing a variable that's been declared but not yet initialized. This will not be the only time we see TDZ errors -- they crop up in several places in ES6. Also, note that "initialized" doesn't require explicitly assigning a value in your code, as let b; is totally valid. A variable that's not given an assignment at declaration time is assumed to have been assigned the undefined value, so let b; is the same as let b = undefined;. Explicit assignment or not, you cannot access b until the let b statement is run.
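Here's a minimal sketch of that equivalence; the commented-out line would throw the TDZ error if uncommented:
{
	// console.log( b );		// would be a ReferenceError (TDZ)

	let b;						// behaves the same as `let b = undefined;`
	console.log( b );			// undefined
}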
One last gotcha: typeof behaves differently with TDZ variables than it does with undeclared (or declared!) variables.
{
if (typeof a === "undefined") {
console.log( "cool" );
}
if (typeof b === "undefined") { // ReferenceError!
// ..
}
// ..
let b;
}
The a is not declared, so typeof is the only safe way to check for its existence or not. But typeof b throws the TDZ error because much farther down in the code there happens to be a let b declaration. Oops.
Now it should be clearer why I strongly prefer -- no, I insist -- let declarations must all be at the top of the scope. That totally avoids the accidental errors of accessing too early. It also makes it more explicit when you look at the start of a block, any block, what variables it contains.
Your blocks (if statements, while loops, etc.) don't have to share their original behavior with scoping behavior.
This explicitness on your part, which is up to you to maintain with discipline, will save you lots of refactor headaches and footguns down the line.
Note: For more information on let and block scoping, see Chapter 3 of the "Scope & Closures" title of this series.
The only exception I'd make to the preference for the explicit form of let declaration block'ing is a let that appears in the header of a for loop. The reason may seem nuanced, but I consider it to be one of the more important ES6 features.
Consider:
var funcs = [];
for (let i = 0; i < 5; i++) {
funcs.push( function(){
console.log( i );
} );
}
funcs[3](); // 3
The let i in the for header declares an i not just for the for loop itself, but it redeclares a new i for each iteration of the loop. That means that closures created inside the loop iteration close over those per-iteration variables the way you'd expect.
If you tried that same snippet but with var i in the for loop header, you'd get 5 instead of 3, because there'd only be one i in the outer scope that was closed over, instead of a new i for each iteration's function to close over.
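Here's that var form spelled out for contrast -- every pushed function closes over the one shared i:
var funcs = [];

for (var i = 0; i < 5; i++) {
	funcs.push( function(){
		console.log( i );
	} );
}

funcs[3]();		// 5 -- all five functions share the same `i`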
You could also have accomplished the same thing slightly more verbosely:
var funcs = [];
for (var i = 0; i < 5; i++) {
let j = i;
funcs.push( function(){
console.log( j );
} );
}
funcs[3](); // 3
Here, we forcibly create a new j for each iteration, and then the closure works the same way. I prefer the former approach; that extra special capability is why I endorse the for (let .. ) .. form. It could be argued it's somewhat more implicit, but it's explicit enough, and useful enough, for my tastes.
There's one other form of block-scoped declaration to consider: the const, which creates constants.
What exactly is a constant? It's a variable that's read-only after its initial value is set. Consider:
{
const a = 2;
console.log( a ); // 2
a = 3; // TypeError!
}
You are not allowed to change the value of the variable once it's been set, at declaration time. A const declaration must have an explicit initialization. If you wanted a constant with the undefined value, you'd have to declare const a = undefined to get it.
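A quick sketch of those two rules (the first line is a syntax error if uncommented):
// const a;					// SyntaxError -- missing initializer
const b = undefined;		// valid: a constant that's explicitly `undefined`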
Constants are not a restriction on the value itself, but on the variable assignment of that value. In other words, the value is not frozen, just the assignment of it. If the value is complex, such as an object or array, the contents of the value can still be modified:
{
const a = [1,2,3];
a.push( 4 );
console.log( a ); // [1,2,3,4]
a = 42; // TypeError!
}
The a variable doesn't actually hold a constant array; it holds a constant reference to the array. The array itself is freely mutable.
Warning: Assigning an object or array as a constant means that value will not be able to be garbage collected until that constant's lexical scope goes away, since the reference to the value can never be unset. That may be desirable, but be careful if it's not your intent!
Essentially, const declarations enforce what we've stylistically signaled with our code for years, where we declared a variable name of all uppercase letters and assigned it some literal value that we took care never to change. There's no enforcement on a var assignment, but there is now with a const assignment, which can help you catch unintended changes.
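For illustration -- the names here are hypothetical, just the ALL-CAPS convention being described:
// pre-ES6: convention only; nothing stops the reassignment
var MAX_SIZE = 100;
MAX_SIZE = 200;			// silently "works"

// ES6: actually enforced
const MAX_COUNT = 100;
MAX_COUNT = 200;		// TypeError!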
There are some rumored assumptions that a const likely will be more optimizable for the JS engine than a let or var would be, since the engine knows the variable will never change, so it can eliminate some possible tracking.
Whether that is the case or just our own fantasies and intuitions, the much more important decision to make is whether you intend constant behavior or not. Don't just use const on variables that otherwise don't obviously appear to be treated as constants in the code, as that will just lead to more confusion.
Starting with ES6, function declarations that occur inside of blocks are now specified to be scoped to that block. Prior to ES6, the specification did not call for this, but many implementations did it anyway. So now the specification meets reality.
Consider:
{
foo(); // works!
function foo() {
// ..
}
}
foo(); // ReferenceError
The foo() function is declared inside the { .. } block, and as of ES6 is block-scoped there. So it's not available outside that block. But also note that it is "hoisted" within the block, as opposed to let declarations, which suffer the TDZ error trap mentioned earlier.
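A small sketch of that contrast:
{
	foo();		// works -- the function declaration hoists within the block
	bar();		// ReferenceError (TDZ)

	function foo() {
		// ..
	}

	let bar = function(){
		// ..
	};
}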
Block-scoping of function declarations could be a problem if you've ever written code like this before, and relied on the old legacy non-block-scoped behavior:
if (something) {
function foo() {
console.log( "1" );
}
}
else {
function foo() {
console.log( "2" );
}
}
foo(); // ??
In pre-ES6 environments, foo() would print "2" regardless of the value of something, since both function declarations were hoisted out of the blocks, and the second one always wins.
In ES6, that last line throws a ReferenceError.
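If you were relying on that conditional-definition pattern, one portable way to express the intent in ES6 is to declare the variable yourself and assign function expressions:
var foo;

if (something) {
	foo = function(){
		console.log( "1" );
	};
}
else {
	foo = function(){
		console.log( "2" );
	};
}

foo();		// "1" or "2", depending on `something`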
ES6 introduces a new ... operator that's typically referred to as the spread or rest operator, depending on where/how it's used. Let's take a look:
function foo(x,y,z) {
console.log( x, y, z );
}
foo( ...[1,2,3] ); // 1 2 3
When ... is used in front of an array (actually, any iterable, which we cover in Chapter 3), it acts to "spread" it out into its individual values.
You'll typically see that usage as is shown in that previous snippet, when spreading out an array as a set of arguments to a function call. In this usage, ... acts to give us a simpler syntactic replacement for the apply(..) method, which we would typically have used pre-ES6 as:
foo.apply( null, [1,2,3] ); // 1 2 3
But ... can be used to spread out/expand a value in other contexts as well, such as inside another array declaration:
var a = [2,3,4];
var b = [ 1, ...a, 5 ];
console.log( b ); // [1,2,3,4,5]
In this usage, ... is basically replacing concat(..), as the above behaves like [1].concat( a, [5] ).
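For comparison, here's that pre-ES6 equivalent spelled out:
var a = [2,3,4];

// pre-ES6 equivalent of `[ 1, ...a, 5 ]`
var b = [1].concat( a, [5] );

console.log( b );		// [1,2,3,4,5]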
The other common usage of ... can be seen as almost the opposite; instead of spreading a value out, the ... gathers a set of values together into an array. Consider:
function foo(x, y, ...z) {
console.log( x, y, z );
}
foo( 1, 2, 3, 4, 5 ); // 1 2 [3,4,5]
The ...z in this snippet is essentially saying: "gather the rest of the arguments (if any) into an array called z." Since x was assigned 1, and y was assigned 2, the rest of the arguments 3, 4, and 5 were gathered into z.
Of course, if you don't have any named parameters, the ... gathers all arguments:
function foo(...args) {
console.log( args );
}
foo( 1, 2, 3, 4, 5); // [1,2,3,4,5]
Note: The ...args in the foo(..) function declaration is usually called "rest parameters", since you're collecting the rest of the parameters. I prefer "gather", since it's more descriptive of what it does, not what it contains.
The best part about this usage is that it provides a very solid alternative to using the long-since-deprecated arguments array -- actually, it's not really an array, but an array-like object. Since args (or whatever you call it -- a lot of people prefer r or rest) is a real array, we can get rid of lots of silly pre-ES6 tricks we jumped through to make arguments into something we can treat as an array.
Consider:
// doing things the new ES6 way
function foo(...args) {
// `args` is already a real array
// discard first element in `args`
args.shift();
// pass along all of `args` as arguments
// to `console.log(..)`
console.log( ...args );
}
// doing things the old-school pre-ES6 way
function bar() {
// turn `arguments` into a real array
var args = Array.prototype.slice.call( arguments );
// add some elements on the end
args.push( 4, 5 );
// filter out odd numbers
args = args.filter( function(v){
return v % 2 == 0;
} );
// pass along all of `args` as arguments
// to `foo(..)`
foo.apply( null, args );
}
bar( 0, 1, 2, 3 ); // 2 4
The ...args in the foo(..) function declaration gathers arguments, and the ...args in the console.log(..) call spreads them out. That's a good illustration of the symmetric but opposite uses of the ... operator.
Besides the ... usage in a function declaration, there's another case where ... is used for gathering values, and we'll look at it in the "Too Many, Too Few, Just Enough" section later in this chapter.
Perhaps one of the most common idioms in JavaScript relates to setting a default value for a function parameter. The way we've done this for years should look quite familiar:
function foo(x,y) {
x = x || 11;
y = y || 31;
console.log( x + y );
}
foo(); // 42
foo( 5, 6 ); // 11
foo( 5 ); // 36
foo( null, 6 ); // 17
Of course, if you've used this pattern before, you know that it's both helpful and a little bit dangerous, if for example you need to be able to pass in what would otherwise be considered a falsy value for one of the parameters. Consider:
foo( 0, 42 ); // 53 <-- Oops, not 42
Why? Because the 0 is falsy, and so the x || 11 results in 11, not the directly passed-in 0.
To fix this gotcha, some people will instead write the check more verbosely like this:
function foo(x,y) {
x = (x !== undefined) ? x : 11;
y = (y !== undefined) ? y : 31;
console.log( x + y );
}
foo( 0, 42 ); // 42
foo( undefined, 6 ); // 17
Of course, that means that any value except undefined can be directly passed in, but undefined will be assumed to be, "I didn't pass this in." That works great unless you actually need to be able to pass undefined in.
In that case, you could test to see if the argument is actually omitted, by it actually not being present in the arguments array, perhaps like this:
function foo(x,y) {
x = (0 in arguments) ? x : 11;
y = (1 in arguments) ? y : 31;
console.log( x + y );
}
foo( 5 ); // 36
foo( 5, undefined ); // NaN
But how would you omit the first x argument without the ability to pass in any kind of value (not even undefined) that signals, "I'm omitting this argument"?
foo(,5) is tempting, but it's invalid syntax. foo.apply(null,[,5]) seems like it should do the trick, but apply(..)'s quirks here mean that the arguments are treated as [undefined,5], which of course doesn't omit.
If you investigate further, you'll find you can only omit arguments on the end (i.e., righthand side) by simply passing fewer arguments than "expected", but you cannot omit arguments in the middle or at the beginning of the arguments list. It's just not possible.
There's a principle applied to JavaScript's design here which is important to remember: undefined means missing. That is, there's no difference between undefined and missing, at least as far as function arguments go.
Warning: There are, confusingly, other places in JS where this particular design principle doesn't apply, such as for arrays with empty slots. See the Types & Grammar title of this series for more information.
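A quick sketch of that principle -- from inside the function, a named parameter can't tell the two cases apart:
function foo(x) {
	console.log( x === undefined );
}

foo();				// true -- argument missing
foo( undefined );	// true -- argument explicitly `undefined`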
With all this in mind, we can now examine a nice helpful syntax added as of ES6 to streamline the assignment of default values to missing arguments:
function foo(x = 11, y = 31) {
console.log( x + y );
}
foo(); // 42
foo( 5, 6 ); // 11
foo( 0, 42 ); // 42
foo( 5 ); // 36
foo( 5, undefined ); // 36 <-- `undefined` is missing
foo( 5, null ); // 5 <-- null coerces to `0`
foo( undefined, 6 ); // 17 <-- `undefined` is missing
foo( null, 6 ); // 6 <-- null coerces to `0`
Notice the results and how they imply both subtle differences and similarities to the earlier approaches.
x = 11 in a function declaration is more like x !== undefined ? x : 11 than the much more common idiom x || 11, so you'll need to be careful in converting your pre-ES6 code to this ES6 default parameter value syntax.
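Here's that difference side by side, in sketch form:
// pre-ES6 idiom: any falsy argument (like `0`) triggers the default
function foo(x) {
	x = x || 11;
	console.log( x );
}

// ES6 default: only a missing (or `undefined`) argument does
function bar(x = 11) {
	console.log( x );
}

foo( 0 );	// 11
bar( 0 );	// 0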
Function default values can be more than just simple values like 31; they can be any valid expression, even a function call:
function bar(val) {
console.log( "bar called!" );
return y + val;
}
function foo(x = y + 3, z = bar( x )) {
console.log( x, z );
}
var y = 5;
foo(); // "bar called"
// 8 13
foo( 10 ); // "bar called"
// 10 15
y = 6;
foo( undefined, 10 ); // 9 10
As you can see, the default value expressions are lazily evaluated, meaning they're only run if and when they're needed -- that is, when a parameter's argument is omitted or is undefined.
It's a subtle detail, but the formal parameters in a function declaration are in their own scope -- think of it as a scope bubble wrapped around just the ( .. ) of the function declaration -- not in the function body's scope. That means a reference to an identifier in a default value expression first matches the formal parameters' scope before looking to an outer scope. See the Scope & Closures title of this series for more information.
Consider:
var w = 1, z = 2;
function foo( x = w + 1, y = x + 1, z = z + 1 ) {
console.log( x, y, z );
}
foo(); // ReferenceError
The w in the w + 1 default value expression looks for w in the formal parameters' scope, but does not find it, so the outer scope's w is used. Next, the x in the x + 1 default value expression finds x in the formal parameters' scope, and luckily x has already been initialized, so the assignment to y works fine.
However, the z in z + 1 finds z as a not-yet-initialized-at-that-moment parameter variable, so it never tries to find the z from the outer scope.
As we mentioned in the "let Declarations" section earlier in this chapter, ES6 has a TDZ, which prevents a variable from being accessed in its uninitialized state. As such, the z + 1 default value expression throws a TDZ ReferenceError error.
Though it's not necessarily a good idea for code clarity, a default value expression can even be an inline function expression call -- commonly referred to as an Immediately Invoked Function Expression (IIFE):
function foo( x =
(function(v){ return v + 11; })( 31 )
) {
console.log( x );
}
foo(); // 42
There will very rarely be any cases where an IIFE (or any other executed inline function expression) will be appropriate for default value expressions. If you find yourself wanting to do this, take a step back and reevaluate!
Warning: If the IIFE had tried to access the x identifier and had not declared its own x, this would also have been a TDZ error, just as discussed before.
The default value expression in the previous snippet is an IIFE in that it's a function that's executed right inline, via ( 31 ). If we had left that part off, the default value assigned to x would have just been a function reference itself, perhaps like a default callback. There will probably be cases where that pattern will be quite useful, such as:
function ajax(url, cb = function(){}) {
// ..
}
ajax( "http://some.url.1" );
In this case, we essentially want to default cb to be a no-op empty function call if not otherwise specified. The function expression is just a function reference, not a function call itself (no invoking () on the end of it), which accomplishes that goal.
Since the early days of JS, there's been a little-known but useful quirk available to us: Function.prototype is itself an empty no-op function. So, the declaration could have been cb = Function.prototype and saved the inline function expression creation.
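In other words, the earlier snippet could have been written as:
function ajax(url, cb = Function.prototype) {
	// ..
}

ajax( "http://some.url.1" );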
ES6 introduces a new syntactic feature called destructuring, which may be a little less confusing sounding if you instead think of it as structured assignment. To understand this meaning, consider:
function foo() {
return [1,2,3];
}
var tmp = foo(),
a = tmp[0], b = tmp[1], c = tmp[2];
console.log( a, b, c ); // 1 2 3
As you can see, we created a manual assignment of the values in the array that foo() returns to individual variables a, b, and c, and to do so we (unfortunately) needed the tmp variable.
We can do similar with objects:
function bar() {
return {
x: 4,
y: 5,
z: 6
};
}
var tmp = bar(),
x = tmp.x, y = tmp.y, z = tmp.z;
console.log( x, y, z ); // 4 5 6
The tmp.x property value is assigned to the x variable, and likewise for tmp.y to y and tmp.z to z.
Manually assigning indexed values from an array or properties from an object can be thought of as structured assignment. To put this into ES6 terms, it's called destructuring assignment.
Specifically, ES6 introduces dedicated syntax for array destructuring and object destructuring, which eliminates the need for the tmp variable in the previous snippets, making them much cleaner. Consider:
var [ a, b, c ] = foo();
var { x: x, y: y, z: z } = bar();
console.log( a, b, c ); // 1 2 3
console.log( x, y, z ); // 4 5 6
You're likely more used to seeing syntax like [a,b,c] on the righthand side of an = assignment, as the value being assigned.
Destructuring symmetrically flips that pattern, so that [a,b,c] on the lefthand side of the = assignment is treated as a kind of "pattern" for decomposing the righthand side array value into separate variable assignments.
Similarly, { x: x, y: y, z: z } specifies a "pattern" to decompose the object value from bar() into separate variable assignments.
Let's dig into that { x: x, .. } syntax from the previous snippet. If the property name being matched is the same as the variable you want to declare, you can actually shorten the syntax:
var { x, y, z } = bar();
console.log( x, y, z ); // 4 5 6
Cool, huh!?
But is { x, .. } leaving off the x: part or leaving off the : x part? As we'll see shortly, we're actually leaving off the x: part when we use the shorter syntax. That may not seem like an important detail, but you'll understand its importance.
If you can write the shorter form, why would you ever write out the longer form? Because that longer form actually allows you to assign a property to a different variable name, which can sometimes be quite useful:
var { x: bam, y: baz, z: bap } = bar();
console.log( bam, baz, bap ); // 4 5 6
console.log( x, y, z ); // ReferenceError
There's a subtle but super important quirk to understand about this variation of the object destructuring form. To illustrate why it can be a gotcha you need to be careful of, let's consider the "pattern" of how normal object literals are specified:
var X = 10, Y = 20;
var o = { a: X, b: Y };
console.log( o.a, o.b ); // 10 20
In { a: X, b: Y }, we know that a is the object property, and X is the source value that gets assigned to it. In other words, the syntactic pattern is target: source, or more obviously, property-alias: value. We intuitively understand this because it's the same as = assignment, where the pattern is target = source.
However, when you use object destructuring assignment -- that is, putting the { .. } object literal looking syntax on the lefthand side of the = operator -- you invert that target: source pattern.
Recall:
var { x: bam, y: baz, z: bap } = bar();
The syntactic pattern here is source: target (or value: variable-alias). x: bam means the x property is the source value and bam is the target variable to assign to. In other words, object literals are target <-- source, and object destructuring assignments are source --> target. See how that's flipped?
There's another way to think about this syntax though, which may help ease the confusion. Consider:
var aa = 10, bb = 20;
var o = { x: aa, y: bb };
var { x: AA, y: BB } = o;
console.log( AA, BB ); // 10 20
In the { x: aa, y: bb } line, the x and y represent the object properties. In the { x: AA, y: BB } line, the x and the y also represent the object properties.
Recall earlier I asserted that { x, .. } was leaving off the x: part? In those two lines, if you erase the x: and y: parts in that snippet, you're left only with aa, bb, AA, and BB, which in effect are assignments from aa to AA and from bb to BB. That's actually what we've accomplished with the snippet.
So, that symmetry may help to explain why the syntactic pattern was intentionally flipped for this ES6 feature.
Note: I would have preferred the syntax to be { AA: x , BB: y } for the destructuring assignment, since that would have preserved consistency of the more familiar target: source pattern for both usages. Alas, I'm having to train my brain for the inversion, as some readers may also have to do.
So far, we've used destructuring assignment with var declarations -- of course, they could also use let and const -- but destructuring is a general assignment operation, not just a declaration.
Consider:
var a, b, c, x, y, z;
[a,b,c] = foo();
( { x, y, z } = bar() );
console.log( a, b, c ); // 1 2 3
console.log( x, y, z ); // 4 5 6
The variables can already be declared, and then the destructuring only does assignments, exactly as we've already seen.
Note: For the object destructuring form specifically, when leaving off a var/let/const declarator, we had to surround the whole assignment expression in ( ), because otherwise the { .. } on the lefthand side as the first element in the statement is taken to be a statement block instead of an object.
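A minimal sketch of what goes wrong without the ( ):
// { x, y, z } = bar();		// SyntaxError -- `{ .. }` is parsed as a block

( { x, y, z } = bar() );	// works -- now it's an expression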
In fact, the assignment expressions (a, y, etc.) don't actually need to be just variable identifiers. Anything that's a valid assignment expression is valid. For example:
var o = {};
[o.a, o.b, o.c] = foo();
( { x: o.x, y: o.y, z: o.z } = bar() );
console.log( o.a, o.b, o.c ); // 1 2 3
console.log( o.x, o.y, o.z ); // 4 5 6
You can even use computed property expressions in the destructuring. Consider:
var which = "x",
o = {};
( { [which]: o[which] } = bar() );
console.log( o.x ); // 4
The [which]: part is the computed property, which results in x -- the property to destructure from the object in question as the source of the assignment. The o[which] part is just a normal object key reference, which equates to o.x as the target of the assignment.
You can use the general assignments to create object mappings/transformations, such as:
var o1 = { a: 1, b: 2, c: 3 },
o2 = {};
( { a: o2.x, b: o2.y, c: o2.z } = o1 );
console.log( o2.x, o2.y, o2.z ); // 1 2 3
Or you can map an object to an array, such as:
var o1 = { a: 1, b: 2, c: 3 },
a2 = [];
( { a: a2[0], b: a2[1], c: a2[2] } = o1 );
console.log( a2 ); // [1,2,3]
Or the other way around:
var a1 = [ 1, 2, 3 ],
o2 = {};
[ o2.a, o2.b, o2.c ] = a1;
console.log( o2.a, o2.b, o2.c ); // 1 2 3
Or you could reorder one array to another:
var a1 = [ 1, 2, 3 ],
a2 = [];
[ a2[2], a2[0], a2[1] ] = a1;
console.log( a2 ); // [2,3,1]
You can even solve the traditional "swap two variables" task without a temporary variable:
var x = 10, y = 20;
[ y, x ] = [ x, y ];
console.log( x, y ); // 20 10
Warning: Be careful not to mix in declaration with assignment unless you want all of the assignment expressions also to be treated as declarations. Otherwise, you'll get syntax errors. That's why in the earlier example I had to do var a2 = [] separately from the [ a2[0], .. ] = .. destructuring assignment. It wouldn't make any sense to try var [ a2[0], .. ] = .., since a2[0] isn't a valid declaration identifier; it also obviously couldn't implicitly create a var a2 = [] declaration.
The assignment expression with object or array destructuring has as its completion value the full right-hand object/array value. Consider:
var o = { a:1, b:2, c:3 },
a, b, c, p;
p = {a,b,c} = o;
console.log( a, b, c ); // 1 2 3
p === o; // true
In the previous snippet, p was assigned the o object reference, not one of the a, b, or c values. The same is true of array destructuring: