diff --git a/.gitignore b/.gitignore
index e44636dc5bb..f85addaf207 100644
--- a/.gitignore
+++ b/.gitignore
@@ -18,6 +18,7 @@ node_g
# various stuff that VC++ produces/uses
Debug/
Release/
+!doc/blog/**
*.sln
!nodemsi.sln
*.suo
@@ -39,6 +40,5 @@ ipch/
/npm.wxs
/tools/msvs/npm.wixobj
email.md
-blog.html
deps/v8-*
-node_modules
+/node_modules
diff --git a/AUTHORS b/AUTHORS
index 4aa563fbfa9..a389d07ce48 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -184,7 +184,7 @@ Yoshihiro KIKUCHI
Brett Kiefer
Mariano Iglesias
Jörn Horstmann
-Joe Shaw
+Joe Shaw
Alex Xu
Kip Gebhardt
Stefan Rusu
@@ -314,3 +314,11 @@ Kevin Gadd
Ray Solomon
Kevin Bowman
Jeroen Janssen
+Matt Gollob
+Simon Sturmer
+Joel Brandt
+Marc Harter
+Nuno Job
+Ben Kelly
+Felix Böhm
+Gabriel de Perthuis
diff --git a/ChangeLog b/ChangeLog
index b2ce37d586a..03a7fdca0bf 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,4 +1,185 @@
-2012.05.28, Version 0.7.9 (unstable)
+2012.06.29, Version 0.8.1 (stable)
+
+* V8: upgrade to v3.11.10.12
+
+* npm: upgrade to v1.1.33
+ - Support for parallel use of the cache folder
+ - Retry on registry timeouts or network failures (Trent Mick)
+ - Reduce 'engines' failures to a warning
+  - Use new zsh completion if available (Jeremy Cantrell)
+
+* Fix #3577 Un-break require('sys')
+
+* util: speed up formatting of large arrays/objects (Ben Noordhuis)
+
+* windows: make fs.realpath(Sync) work with UNC paths (Bert Belder)
+
+* build: fix --shared-v8 option (Ben Noordhuis)
+
+* doc: `detached` is a boolean (Andreas Madsen)
+
+* build: use proper python interpreter (Ben Noordhuis)
+
+* build: expand ~ in `./configure --prefix=~/a/b/c` (Ben Noordhuis)
+
+* build: handle CC env var with spaces (Gabriel de Perthuis)
+
+* build: fix V8 build when compiling with gcc 4.5 (Ben Noordhuis)
+
+* build: fix --shared-v8 option (Ben Noordhuis)
+
+* windows msi: Fix icon issue which caused huge file size (Bert Belder)
+
+* unix: assume that dlopen() may clobber dlerror() (Ben Noordhuis)
+
+* sunos: fix memory corruption bugs (Ben Noordhuis)
+
+* windows: better (f)utimes and (f)stat (Bert Belder)
+
+
+2012.06.25, Version 0.8.0 (stable), 8b8a7a7f9b41e74e1e810d0330738ad06fc302ec
+
+* V8: upgrade to v3.11.10.10
+
+* npm: Upgrade to 1.1.32
+
+* Deprecate iowatcher (Ben Noordhuis)
+
+* windows: update icon (Bert Belder)
+
+* http: Hush 'MUST NOT have a body' warnings to debug() (isaacs)
+
+* Move blog.nodejs.org content into repository (isaacs)
+
+* Fix #3503: stdin: resume() on pipe(dest) (isaacs)
+
+* crypto: fix error reporting in SetKey() (Fedor Indutny)
+
+* Add --no-deprecation and --trace-deprecation command-line flags (isaacs)
+
+* fs: fix fs.watchFile() (Ben Noordhuis)
+
+* fs: Fix fs.readFile() on pipes (isaacs)
+
+* Rename GYP variable node_use_system_openssl to be consistent (Ryan Dahl)
+
+
+2012.06.19, Version 0.7.12 (unstable), a72120190a8ffdbcd3d6ad2a2e6ceecd2087111e
+
+* npm: Upgrade to 1.1.30
+ - Improved 'npm init'
+  - Fix the 'cb never called' error from 'outdated' and 'update'
+ - Add --save-bundle|-B config
+ - Fix isaacs/npm#2465: Make npm script and windows shims cygwin-aware
+ - Fix isaacs/npm#2452 Use --save(-dev|-optional) in npm rm
+ - `logstream` option to replace removed `logfd` (Rod Vagg)
+ - Read default descriptions from README.md files
+
+* Shims to support deprecated ev_* and eio_* methods (Ben Noordhuis)
+
+* #3118 net.Socket: Delay pause/resume until after connect (isaacs)
+
+* #3465 Add ./configure --no-ifaddrs flag (isaacs)
+
+* child_process: add .stdin stream to forks (Fedor Indutny)
+
+* build: fix `make install DESTDIR=/path` (Ben Noordhuis)
+
+* tls: fix off-by-one error in renegotiation check (Ben Noordhuis)
+
+* crypto: Fix diffie-hellman key generation UTF-8 errors (Fedor Indutny)
+
+* node: change the constructor name of process from EventEmitter to process (Andreas Madsen)
+
+* net: Prevent property access throws during close (Reid Burke)
+
+* querystring: improved speed and code cleanup (Felix Böhm)
+
+* sunos: fix assertion errors breaking fs.watch() (Fedor Indutny)
+
+* unix: stat: detect sub-second changes (Ben Noordhuis)
+
+* add stat() based file watcher (Ben Noordhuis)
+
+
+2012.06.15, Version 0.7.11 (unstable), 5cfe0b86d5be266ef51bbba369c39e412ee51944
+
+* V8: Upgrade to v3.11.10
+
+* npm: Upgrade to 1.1.26
+
+* doc: Improve cross-linking in API docs markdown (Ben Kelly)
+
+* Fix #3425: removeAllListeners should delete array (Reid Burke)
+
+* cluster: don't silently drop messages when the write queue gets big (Bert Belder)
+
+* Add Buffer.concat method (isaacs)
+
+* windows: make symlinks tolerant to forward slashes (Bert Belder)
+
+* build: Add node.d and node.1 to installer (isaacs)
+
+* cluster: rename worker.uniqueID to worker.id (Andreas Madsen)
+
+* Windows: Enable ETW events on Windows for existing DTrace probes. (Igor Zinkovsky)
+
+* test: bundle node-weak in test/gc so that it doesn't need to be downloaded (Nathan Rajlich)
+
+* Make many tests pass on Windows (Bert Belder)
+
+* Fix #3388 Support listening on file descriptors (isaacs)
+
+* Fix #3407 Add os.tmpDir() (isaacs)
+
+* Unbreak the snapshotted build on Windows (Bert Belder)
+
+* Clean up child_process.kill throws (Bert Belder)
+
+* crypto: make cipher/decipher accept buffer args (Ben Noordhuis)
+
+
+2012.06.11, Version 0.7.10 (unstable), 12a32a48a30182621b3f8e9b9695d1946b53c131
+
+* Roll V8 back to 3.9.24.31
+
+* build: x64 target should always pass -m64 (Robert Mustacchi)
+
+* add NODE_EXTERN to node::Start (Joel Brandt)
+
+* repl: Warn about running npm commands (isaacs)
+
+* slab_allocator: fix crash in dtor if V8 is dead (Ben Noordhuis)
+
+* slab_allocator: fix leak of Persistent handles (Shigeki Ohtsu)
+
+* windows/msi: add node.js prompt to startmenu (Jeroen Janssen)
+
+* windows/msi: fix adding node to PATH (Jeroen Janssen)
+
+* windows/msi: add start menu links when installing (Jeroen Janssen)
+
+* windows: don't install x64 version into the 'program files (x86)' folder (Matt Gollob)
+
+* domain: Fix #3379 domain.intercept no longer passes error arg to cb (Marc Harter)
+
+* fs: make callbacks run in global context (Ben Noordhuis)
+
+* fs: enable fs.realpath on windows (isaacs)
+
+* child_process: expose UV_PROCESS_DETACHED as options.detached (Charlie McConnell)
+
+* child_process: new stdio API for .spawn() method (Fedor Indutny)
+
+* child_process: spawn().ref() and spawn().unref() (Fedor Indutny)
+
+* Upgrade npm to 1.1.25
+ - Enable npm link on windows
+ - Properly remove sh-shim on Windows
+ - Abstract out registry client and logger
+
+
+2012.05.28, Version 0.7.9 (unstable), 782277f11a753ded831439ed826448c06fc0f356
* Upgrade V8 to 3.11.1
diff --git a/LICENSE b/LICENSE
index f464affffc1..adc86f22a6c 100644
--- a/LICENSE
+++ b/LICENSE
@@ -196,7 +196,7 @@ maintained libraries. The externally maintained libraries used by Node are:
"""
- C-Ares, an asynchronous DNS client, located at deps/uv/src/ares. C-Ares license
- follows
+ follows:
"""
/* Copyright 1998 by the Massachusetts Institute of Technology.
*
@@ -215,7 +215,7 @@ maintained libraries. The externally maintained libraries used by Node are:
- OpenSSL located at deps/openssl. OpenSSL is cryptographic software written
by Eric Young (eay@cryptsoft.com) to provide SSL/TLS encryption. OpenSSL's
- license follows
+ license follows:
"""
/* ====================================================================
* Copyright (c) 1998-2011 The OpenSSL Project. All rights reserved.
@@ -225,7 +225,7 @@ maintained libraries. The externally maintained libraries used by Node are:
* are met:
*
* 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
+ * notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
@@ -296,11 +296,11 @@ maintained libraries. The externally maintained libraries used by Node are:
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- IN THE SOFTWARE.
+ IN THE SOFTWARE.
"""
- Closure Linter is located at tools/closure_linter. Closure's license
- follows
+ follows:
"""
# Copyright (c) 2007, Google Inc.
# All rights reserved.
@@ -403,7 +403,7 @@ maintained libraries. The externally maintained libraries used by Node are:
* Available under MIT license
"""
-- tools/gyp GYP is a meta-build system. GYP's license follows:
+- tools/gyp. GYP is a meta-build system. GYP's license follows:
"""
Copyright (c) 2009 Google Inc. All rights reserved.
@@ -434,7 +434,7 @@ maintained libraries. The externally maintained libraries used by Node are:
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
-- Zlib at deps/zlib. zlib's license follows
+- Zlib at deps/zlib. zlib's license follows:
"""
/* zlib.h -- interface of the 'zlib' general purpose compression library
version 1.2.4, March 14th, 2010
@@ -463,8 +463,8 @@ maintained libraries. The externally maintained libraries used by Node are:
*/
"""
-- npm is a package manager program located at deps/npm.
- npm's license follows:
+- npm is a package manager program located at deps/npm.
+ npm's license follows:
"""
Copyright 2009-2012, Isaac Z. Schlueter (the "Original Author")
All rights reserved.
@@ -517,6 +517,11 @@ maintained libraries. The externally maintained libraries used by Node are:
"npm Logo" created by Mathias Pettersson and Brian Hammond,
used with permission.
+ "Gubblebum Blocky" font
+ Copyright (c) 2007 by Tjarda Koster, http://jelloween.deviantart.com
+ included for use in the npm website and documentation,
+ used with permission.
+
This program uses "node-uuid", Copyright (c) 2010 Robert Kieffer,
according to the terms of the MIT license.
@@ -527,8 +532,8 @@ maintained libraries. The externally maintained libraries used by Node are:
according to the terms of the MIT/X11 license.
"""
-- tools/doc/node_modules/marked Marked is a Markdown parser. Marked's
- license follows
+- tools/doc/node_modules/marked. Marked is a Markdown parser. Marked's
+ license follows:
"""
Copyright (c) 2011-2012, Christopher Jeffrey (https://github.com/chjj/)
@@ -551,8 +556,26 @@ maintained libraries. The externally maintained libraries used by Node are:
THE SOFTWARE.
"""
-- src/ngx-queue.h ngx-queue.h is taken from the nginx source tree. nginx's
- license follows
+- test/gc/node_modules/weak. Node-weak is a node.js addon that provides garbage
+ collector notifications. Node-weak's license follows:
+ """
+ Copyright (c) 2011, Ben Noordhuis
+
+ Permission to use, copy, modify, and/or distribute this software for any
+ purpose with or without fee is hereby granted, provided that the above
+ copyright notice and this permission notice appear in all copies.
+
+ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ """
+
+- src/ngx-queue.h. ngx-queue.h is taken from the nginx source tree. nginx's
+ license follows:
"""
Copyright (C) 2002-2012 Igor Sysoev
Copyright (C) 2011,2012 Nginx, Inc.
diff --git a/Makefile b/Makefile
index a80da5141fb..51a8b3ea92c 100644
--- a/Makefile
+++ b/Makefile
@@ -31,7 +31,7 @@ out/Debug/node:
$(MAKE) -C out BUILDTYPE=Debug
out/Makefile: common.gypi deps/uv/uv.gyp deps/http_parser/http_parser.gyp deps/zlib/zlib.gyp deps/v8/build/common.gypi deps/v8/tools/gyp/v8.gyp node.gyp config.gypi
- tools/gyp_node -f make
+ $(PYTHON) tools/gyp_node -f make
install: all
out/Release/node tools/installer.js install $(DESTDIR)
@@ -61,16 +61,16 @@ test-http1: all
test-valgrind: all
$(PYTHON) tools/test.py --mode=release --valgrind simple message
-node_modules/weak:
+test/gc/node_modules/weak/build:
@if [ ! -f node ]; then make all; fi
- @if [ ! -d node_modules ]; then mkdir -p node_modules; fi
- ./node deps/npm/bin/npm-cli.js install weak \
- --prefix="$(shell pwd)" --unsafe-perm # go ahead and run as root.
+ ./node deps/npm/node_modules/node-gyp/bin/node-gyp rebuild \
+ --directory="$(shell pwd)/test/gc/node_modules/weak" \
+ --nodedir="$(shell pwd)"
-test-gc: all node_modules/weak
+test-gc: all test/gc/node_modules/weak/build
$(PYTHON) tools/test.py --mode=release gc
-test-all: all node_modules/weak
+test-all: all test/gc/node_modules/weak/build
$(PYTHON) tools/test.py --mode=debug,release
make test-npm
@@ -130,7 +130,13 @@ website_files = \
out/doc/changelog.html \
$(doc_images)
-doc: program $(apidoc_dirs) $(website_files) $(apiassets) $(apidocs) tools/doc/
+doc: program $(apidoc_dirs) $(website_files) $(apiassets) $(apidocs) tools/doc/ blog
+
+blogclean:
+ rm -rf out/blog
+
+blog: doc/blog out/Release/node tools/blog
+ out/Release/node tools/blog/generate.js doc/blog/ out/blog/ doc/blog.html doc/rss.xml
$(apidoc_dirs):
mkdir -p $@
@@ -160,6 +166,9 @@ email.md: ChangeLog tools/email-footer.md
blog.html: email.md
cat $< | ./node tools/doc/node_modules/.bin/marked > $@
+blog-upload: blog
+ rsync -r out/blog/ node@nodejs.org:~/web/nodejs.org/blog/
+
website-upload: doc
rsync -r out/doc/ node@nodejs.org:~/web/nodejs.org/
ssh node@nodejs.org '\
@@ -208,6 +217,17 @@ $(PKG):
--out $(PKG)
$(TARBALL): node out/doc
+ @if [ "$(shell git status --porcelain | egrep -v '^\?\? ')" = "" ]; then \
+ exit 0 ; \
+ else \
+ echo "" >&2 ; \
+ echo "The git repository is not clean." >&2 ; \
+ echo "Please commit changes before building release tarball." >&2 ; \
+ echo "" >&2 ; \
+ git status --porcelain | egrep -v '^\?\?' >&2 ; \
+ echo "" >&2 ; \
+ exit 1 ; \
+ fi
@if [ $(shell ./node --version) = "$(VERSION)" ]; then \
exit 0; \
else \
@@ -218,11 +238,12 @@ $(TARBALL): node out/doc
exit 1 ; \
fi
git archive --format=tar --prefix=$(TARNAME)/ HEAD | tar xf -
- mkdir -p $(TARNAME)/doc
+ mkdir -p $(TARNAME)/doc/api
cp doc/node.1 $(TARNAME)/doc/node.1
- cp -r out/doc/api $(TARNAME)/doc/api
+ cp -r out/doc/api/* $(TARNAME)/doc/api/
rm -rf $(TARNAME)/deps/v8/test # too big
rm -rf $(TARNAME)/doc/images # too big
+ find $(TARNAME)/ -type l | xargs rm # annoying on windows
tar -cf $(TARNAME).tar $(TARNAME)
rm -rf $(TARNAME)
gzip -f -9 $(TARNAME).tar
@@ -248,4 +269,4 @@ cpplint:
lint: jslint cpplint
-.PHONY: lint cpplint jslint bench clean docopen docclean doc dist distclean check uninstall install install-includes install-bin all program staticlib dynamiclib test test-all website-upload pkg
+.PHONY: lint cpplint jslint bench clean docopen docclean doc dist distclean check uninstall install install-includes install-bin all program staticlib dynamiclib test test-all website-upload pkg blog blogclean
diff --git a/benchmark/io.js b/benchmark/io.js
index 505d8d03aae..1c18e05f615 100644
--- a/benchmark/io.js
+++ b/benchmark/io.js
@@ -62,7 +62,7 @@ function readtest(size, bsize) {
function wt(tsize, bsize, done) {
var start = Date.now();
- s = writetest(tsize, bsizes[0]);
+ s = writetest(tsize, bsize);
s.addListener('close', function() {
var end = Date.now();
var diff = end - start;
@@ -73,7 +73,7 @@ function wt(tsize, bsize, done) {
function rt(tsize, bsize, done) {
var start = Date.now();
- s = readtest(tsize, bsizes[0]);
+ s = readtest(tsize, bsize);
s.addListener('close', function() {
var end = Date.now();
var diff = end - start;
diff --git a/common.gypi b/common.gypi
index 8b3e7c2e740..8d604141857 100644
--- a/common.gypi
+++ b/common.gypi
@@ -1,6 +1,6 @@
{
'variables': {
- 'strict_aliasing%': 'false', # turn on/off -fstrict-aliasing
+ 'node_no_strict_aliasing%': 0, # turn off -fstrict-aliasing
'visibility%': 'hidden', # V8's visibility setting
'target_arch%': 'ia32', # set v8's target architecture
'host_arch%': 'ia32', # set v8's host architecture
@@ -52,7 +52,7 @@
# pull in V8's postmortem metadata
'ldflags': [ '-Wl,-z,allextract' ]
}],
- ['strict_aliasing!="true"', {
+ ['node_no_strict_aliasing==1', {
'cflags': [ '-fno-strict-aliasing' ],
}],
],
@@ -145,6 +145,10 @@
'cflags': [ '-m32' ],
'ldflags': [ '-m32' ],
}],
+ [ 'target_arch=="x64"', {
+ 'cflags': [ '-m64' ],
+ 'ldflags': [ '-m64' ],
+ }],
[ 'OS=="linux"', {
'ldflags': [ '-rdynamic' ],
}],
diff --git a/configure b/configure
index 8633d48c31d..d324696f40d 100755
--- a/configure
+++ b/configure
@@ -65,20 +65,43 @@ parser.add_option("--shared-v8-libname",
dest="shared_v8_libname",
help="Alternative lib name to link to (default: 'v8')")
+parser.add_option("--shared-openssl",
+ action="store_true",
+ dest="shared_openssl",
+    help="Link to a shared OpenSSL DLL instead of static linking")
+
+parser.add_option("--shared-openssl-includes",
+ action="store",
+ dest="shared_openssl_includes",
+ help="Directory containing OpenSSL header files")
+
+parser.add_option("--shared-openssl-libpath",
+ action="store",
+ dest="shared_openssl_libpath",
+ help="A directory to search for the shared OpenSSL DLLs")
+
+parser.add_option("--shared-openssl-libname",
+ action="store",
+ dest="shared_openssl_libname",
+ help="Alternative lib name to link to (default: 'crypto,ssl')")
+
+# deprecated
parser.add_option("--openssl-use-sys",
action="store_true",
- dest="openssl_use_sys",
- help="Use the system OpenSSL instead of one included with Node")
+ dest="shared_openssl",
+ help=optparse.SUPPRESS_HELP)
+# deprecated
parser.add_option("--openssl-includes",
action="store",
- dest="openssl_includes",
- help="A directory to search for the OpenSSL includes")
+ dest="shared_openssl_includes",
+ help=optparse.SUPPRESS_HELP)
+# deprecated
parser.add_option("--openssl-libpath",
action="store",
- dest="openssl_libpath",
- help="A directory to search for the OpenSSL libraries")
+ dest="shared_openssl_libpath",
+ help=optparse.SUPPRESS_HELP)
parser.add_option("--no-ssl2",
action="store_true",
@@ -115,6 +138,16 @@ parser.add_option("--without-dtrace",
dest="without_dtrace",
help="Build without DTrace")
+parser.add_option("--with-etw",
+ action="store_true",
+ dest="with_etw",
+ help="Build with ETW (default is true on Windows)")
+
+parser.add_option("--without-etw",
+ action="store_true",
+ dest="without_etw",
+ help="Build without ETW")
+
# CHECKME does this still work with recent releases of V8?
parser.add_option("--gdb",
action="store_true",
@@ -126,6 +159,11 @@ parser.add_option("--dest-cpu",
dest="dest_cpu",
help="CPU architecture to build for. Valid values are: arm, ia32, x64")
+parser.add_option("--no-ifaddrs",
+ action="store_true",
+ dest="no_ifaddrs",
+ help="Use on deprecated SunOS systems that do not support ifaddrs.h")
+
(options, args) = parser.parse_args()
@@ -224,40 +262,37 @@ def host_arch():
def target_arch():
return host_arch()
-def cc_version():
- try:
- proc = subprocess.Popen([CC, '-v'], stderr=subprocess.PIPE)
- except OSError:
- return None
- lines = proc.communicate()[1].split('\n')
- version_line = None
- for i, line in enumerate(lines):
- if 'version' in line:
- version_line = line
- if not version_line:
- return None
- version = version_line.split("version")[1].strip().split()[0].split(".")
- if not version:
- return None
- return ['LLVM' in version_line] + version
+
+def compiler_version():
+ proc = subprocess.Popen(CC.split() + ['--version'], stdout=subprocess.PIPE)
+ is_clang = 'clang' in proc.communicate()[0].split('\n')[0]
+
+ proc = subprocess.Popen(CC.split() + ['-dumpversion'], stdout=subprocess.PIPE)
+ version = tuple(map(int, proc.communicate()[0].split('.')))
+
+ return (version, is_clang)
+
def configure_node(o):
# TODO add gdb
- o['variables']['node_prefix'] = options.prefix if options.prefix else ''
+ o['variables']['node_prefix'] = os.path.expanduser(options.prefix or '')
o['variables']['node_install_npm'] = b(not options.without_npm)
o['variables']['node_install_waf'] = b(not options.without_waf)
o['variables']['host_arch'] = host_arch()
o['variables']['target_arch'] = options.dest_cpu or target_arch()
o['default_configuration'] = 'Debug' if options.debug else 'Release'
+ cc_version, is_clang = compiler_version()
+
# turn off strict aliasing if gcc < 4.6.0 unless it's llvm-gcc
# see http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45883
# see http://code.google.com/p/v8/issues/detail?id=884
- o['variables']['strict_aliasing'] = b(
- 'clang' in CC or cc_version() >= [False, 4, 6, 0])
+ no_strict_aliasing = int(not(is_clang or cc_version >= (4,6,0)))
+ o['variables']['v8_no_strict_aliasing'] = no_strict_aliasing
+ o['variables']['node_no_strict_aliasing'] = no_strict_aliasing
# clang has always supported -fvisibility=hidden, right?
- if 'clang' not in CC and cc_version() < [False, 4, 0, 0]:
+ if not is_clang and cc_version < (4,0,0):
o['variables']['visibility'] = ''
# By default, enable DTrace on SunOS systems. Don't allow it on other
@@ -272,6 +307,17 @@ def configure_node(o):
else:
o['variables']['node_use_dtrace'] = 'false'
+ if options.no_ifaddrs:
+ o['defines'] += ['SUNOS_NO_IFADDRS']
+
+ # By default, enable ETW on Windows.
+ if sys.platform.startswith('win32'):
+    o['variables']['node_use_etw'] = b(not options.without_etw)
+ elif b(options.with_etw) == 'true':
+ raise Exception('ETW is only supported on Windows.')
+ else:
+ o['variables']['node_use_etw'] = 'false'
+
def configure_libz(o):
o['variables']['node_shared_zlib'] = b(options.shared_zlib)
@@ -300,11 +346,11 @@ def configure_v8(o):
o['libraries'] += ['-lv8']
if options.shared_v8_includes:
o['include_dirs'] += [options.shared_v8_includes]
- o['variables']['node_shared_v8_includes'] = options.shared_v8_includes
def configure_openssl(o):
o['variables']['node_use_openssl'] = b(not options.without_ssl)
+ o['variables']['node_shared_openssl'] = b(options.shared_openssl)
if options.without_ssl:
return
@@ -312,25 +358,23 @@ def configure_openssl(o):
if options.no_ssl2:
o['defines'] += ['OPENSSL_NO_SSL2=1']
- if not options.openssl_use_sys:
- o['variables']['node_use_system_openssl'] = b(False)
- else:
- out = pkg_config('openssl')
- (libs, cflags) = out if out else ('', '')
+ if options.shared_openssl:
+ (libs, cflags) = pkg_config('openssl') or ('-lssl -lcrypto', '')
- if options.openssl_libpath:
- o['libraries'] += ['-L%s' % options.openssl_libpath, '-lssl', '-lcrypto']
+ if options.shared_openssl_libpath:
+ o['libraries'] += ['-L%s' % options.shared_openssl_libpath]
+
+ if options.shared_openssl_libname:
+ libnames = options.shared_openssl_libname.split(',')
+ o['libraries'] += ['-l%s' % s for s in libnames]
else:
o['libraries'] += libs.split()
- if options.openssl_includes:
- o['include_dirs'] += [options.openssl_includes]
+ if options.shared_openssl_includes:
+ o['include_dirs'] += [options.shared_openssl_includes]
else:
o['cflags'] += cflags.split()
- o['variables']['node_use_system_openssl'] = b(
- libs or cflags or options.openssl_libpath or options.openssl_includes)
-
output = {
'variables': {},
@@ -368,7 +412,8 @@ write('config.mk', "# Do not edit. Generated by the configure script.\n" +
("BUILDTYPE=%s\n" % ('Debug' if options.debug else 'Release')))
if os.name == 'nt':
- subprocess.call(['python', 'tools/gyp_node', '-f', 'msvs',
- '-G', 'msvs_version=2010'])
+ gyp_args = ['-f', 'msvs', '-G', 'msvs_version=2010']
else:
- subprocess.call(['tools/gyp_node', '-f', 'make'])
+ gyp_args = ['-f', 'make']
+
+subprocess.call([sys.executable, 'tools/gyp_node'] + gyp_args)
diff --git a/deps/npm/.npmignore b/deps/npm/.npmignore
index 94dc33f0040..b1d9066bde9 100644
--- a/deps/npm/.npmignore
+++ b/deps/npm/.npmignore
@@ -1,16 +1,16 @@
*.swp
-test/bin
-test/output.log
-test/packages/*/node_modules
-test/packages/npm-test-depends-on-spark/which-spark.log
-test/packages/test-package/random-data.txt
-test/root
-node_modules/ronn
-node_modules/.bin
npm-debug.log
-./npmrc
-.gitignore
-release/
+/test/bin
+/test/output.log
+/test/packages/*/node_modules
+/test/packages/npm-test-depends-on-spark/which-spark.log
+/test/packages/test-package/random-data.txt
+/test/root
+/node_modules/ronn
+/node_modules/tap
+/node_modules/.bin
+/npmrc
+/release/
# don't need these in the npm package.
html/*.png
diff --git a/deps/npm/AUTHORS b/deps/npm/AUTHORS
index a2b8141d701..175196a675c 100644
--- a/deps/npm/AUTHORS
+++ b/deps/npm/AUTHORS
@@ -65,3 +65,7 @@ Jens Grunert
Joost-Wim Boekesteijn
Dalmais Maxence
Marcus Ekwall
+Aaron Stacy
+Phillip Howell
+Domenic Denicola
+James Halliday
diff --git a/deps/npm/LICENSE b/deps/npm/LICENSE
index c94425929d3..3702d8a05b6 100644
--- a/deps/npm/LICENSE
+++ b/deps/npm/LICENSE
@@ -49,6 +49,11 @@ and are not covered by this license.
"npm Logo" created by Mathias Pettersson and Brian Hammond,
used with permission.
+"Gubblebum Blocky" font
+Copyright (c) 2007 by Tjarda Koster, http://jelloween.deviantart.com
+included for use in the npm website and documentation,
+used with permission.
+
This program uses "node-uuid", Copyright (c) 2010 Robert Kieffer,
according to the terms of the MIT license.
diff --git a/deps/npm/Makefile b/deps/npm/Makefile
index 19efd815c92..2663075c652 100644
--- a/deps/npm/Makefile
+++ b/deps/npm/Makefile
@@ -121,6 +121,8 @@ docpublish: doc-publish
doc-publish: doc
rsync -vazu --stats --no-implied-dirs --delete html/doc/ npmjs.org:/var/www/npmjs.org/public/doc
rsync -vazu --stats --no-implied-dirs --delete html/api/ npmjs.org:/var/www/npmjs.org/public/api
+ rsync -vazu --stats --no-implied-dirs --delete html/webfonts/ npmjs.org:/var/www/npmjs.org/public/webfonts
+ scp html/style.css npmjs.org:/var/www/npmjs.org/public/
zip-publish: release
scp release/* npmjs.org:/var/www/npmjs.org/public/dist/
diff --git a/deps/npm/README.md b/deps/npm/README.md
index d5e285ab0e6..1257f147168 100644
--- a/deps/npm/README.md
+++ b/deps/npm/README.md
@@ -89,21 +89,15 @@ To install the latest **unstable** development version from git:
git clone https://github.com/isaacs/npm.git
cd npm
- git submodule update --init --recursive
sudo make install # (or: `node cli.js install -gf`)
If you're sitting in the code folder reading this document in your
terminal, then you've already got the code. Just do:
- git submodule update --init --recursive
sudo make install
and npm will install itself.
-Note that github tarballs **do not contain submodules**, so
-those won't work. You'll have to also fetch the appropriate submodules
-listed in the .gitmodules file.
-
## Permissions when Using npm to Install Other Stuff
**tl;dr**
diff --git a/deps/npm/bin/npm b/deps/npm/bin/npm
index 5fbcd3b035c..07ade35e08a 100755
--- a/deps/npm/bin/npm
+++ b/deps/npm/bin/npm
@@ -1,6 +1,13 @@
#!/bin/sh
-if [ -x "`dirname "$0"`/node.exe" ]; then
- "`dirname "$0"`/node.exe" "`dirname "$0"`/node_modules/npm/bin/npm-cli.js" "$@"
+
+basedir=`dirname "$0"`
+
+case `uname` in
+ *CYGWIN*) basedir=`cygpath -w "$basedir"`;;
+esac
+
+if [ -x "$basedir/node.exe" ]; then
+ "$basedir/node.exe" "$basedir/node_modules/npm/bin/npm-cli.js" "$@"
else
- node "`dirname "$0"`/node_modules/npm/bin/npm-cli.js" "$@"
+ node "$basedir/node_modules/npm/bin/npm-cli.js" "$@"
fi
diff --git a/deps/npm/bin/npm-cli.js b/deps/npm/bin/npm-cli.js
index f29437093e5..a71985b37b9 100755
--- a/deps/npm/bin/npm-cli.js
+++ b/deps/npm/bin/npm-cli.js
@@ -15,9 +15,9 @@ if (typeof WScript !== "undefined") {
process.title = "npm"
-var log = require("../lib/utils/log.js")
-log.waitForConfig()
-log.info("ok", "it worked if it ends with")
+var log = require("npmlog")
+log.pause() // will be unpaused when config is loaded.
+log.info("it worked if it ends with", "ok")
var fs = require("graceful-fs")
, path = require("path")
@@ -36,7 +36,7 @@ if (path.basename(process.argv[1]).slice(-1) === "g") {
process.argv.splice(1, 1, "npm", "-g")
}
-log.verbose(process.argv, "cli")
+log.verbose("cli", process.argv)
var conf = nopt(types, shorthands)
npm.argv = conf.argv.remain
@@ -56,8 +56,8 @@ if (conf.versions) {
return
}
-log.info("npm@"+npm.version, "using")
-log.info("node@"+process.version, "using")
+log.info("using", "npm@%s", npm.version)
+log.info("using", "node@%s", process.version)
// make sure that this version of node works with this version of npm.
var semver = require("semver")
diff --git a/deps/npm/bin/read-package-json.js b/deps/npm/bin/read-package-json.js
index 8c95d86e8b1..3e5a0c77f25 100755
--- a/deps/npm/bin/read-package-json.js
+++ b/deps/npm/bin/read-package-json.js
@@ -6,7 +6,7 @@ if (argv.length < 3) {
var fs = require("fs")
, file = argv[2]
- , readJson = require("../lib/utils/read-json")
+ , readJson = require("read-package-json")
readJson(file, function (er, data) {
if (er) throw er
diff --git a/deps/npm/doc/cli/coding-style.md b/deps/npm/doc/cli/coding-style.md
index 42ac1d785f8..c505dba83f3 100644
--- a/deps/npm/doc/cli/coding-style.md
+++ b/deps/npm/doc/cli/coding-style.md
@@ -129,29 +129,18 @@ Just send the error message back as the first argument to the callback.
Always create a new Error object with your message. Don't just return a
string message to the callback. Stack traces are handy.
-Use the `require("./utils/log").er` function. It takes a callback and an
-error message, and returns an object that will report the message in the
-event of a failure. It's quite handy.
-
- function myThing (args, cb) {
- getData(args, function (er, data) {
- if (er) return log.er(cb, "Couldn't get data")(er)
- doSomethingElse(data, cb)
- })
- }
- function justHasToWork (cb) {
- doSomething(log.er(cb, "the doSomething failed."))
- }
-
## Logging
+Logging is done using the [npmlog](https://github.com/isaacs/npmlog)
+utility.
+
Please clean up logs when they are no longer helpful. In particular,
logging the same object over and over again is not helpful. Logs should
report what's happening so that it's easier to track down where a fault
occurs.
-Use appropriate log levels. The default log() function logs at the
-"info" level. See `npm-config(1)` and search for "loglevel".
+Use appropriate log levels. See `npm-config(1)` and search for
+"loglevel".
## Case, naming, etc.
diff --git a/deps/npm/doc/cli/config.md b/deps/npm/doc/cli/config.md
index 3fd9cb82699..537af5ca0ea 100644
--- a/deps/npm/doc/cli/config.md
+++ b/deps/npm/doc/cli/config.md
@@ -117,6 +117,7 @@ The following shorthands are parsed on the command-line:
* `-S`: `--save`
* `-D`: `--save-dev`
* `-O`: `--save-optional`
+* `-B`: `--save-bundle`
* `-y`: `--yes`
* `-n`: `--yes false`
* `ll` and `la` commands: `ls --long`
@@ -167,32 +168,6 @@ then the user could change the behavior by doing:
Force npm to always require authentication when accessing the registry,
even for `GET` requests.
-### bin-publish
-
-* Default: false
-* Type: Boolean
-
-If set to true, then binary packages will be created on publish.
-
-This is the way to opt into the "bindist" behavior described below.
-
-### bindist
-
-* Default: Unstable node versions, `null`, otherwise
- `"--"`
-* Type: String or `null`
-
-Experimental: on stable versions of node, binary distributions will be
-created with this tag. If a user then installs that package, and their
-`bindist` tag is found in the list of binary distributions, they will
-get that prebuilt version.
-
-Pre-build node packages have their preinstall, install, and postinstall
-scripts stripped (since they are run prior to publishing), and do not
-have their `build` directories automatically ignored.
-
-It's yet to be seen if this is a good idea.
-
### browser
* Default: OS X: `"open"`, others: `"google-chrome"`
@@ -220,6 +195,27 @@ See also the `strict-ssl` config.
The location of npm's cache directory. See `npm-cache(1)`
+### cache-lock-stale
+
+* Default: 60000 (1 minute)
+* Type: Number
+
+The number of ms before cache folder lockfiles are considered stale.
+
+### cache-lock-retries
+
+* Default: 10
+* Type: Number
+
+Number of times to retry to acquire a lock on cache folder lockfiles.
+
+### cache-lock-wait
+
+* Default: 10000 (10 seconds)
+* Type: Number
+
+Number of ms to wait for cache lock files to expire.
+
### cache-max
* Default: Infinity
@@ -291,6 +287,15 @@ set.
The command to run for `npm edit` or `npm config edit`.
+### engine-strict
+
+* Default: false
+* Type: Boolean
+
+If set to true, then npm will stubbornly refuse to install (or even
+consider installing) any package that claims to not be compatible with
+the current Node.js version.
+
### force
* Default: false
@@ -303,6 +308,38 @@ Makes various commands more forceful.
* skips cache when requesting from the registry.
* prevents checks against clobbering non-npm files.
+### fetch-retries
+
+* Default: 2
+* Type: Number
+
+The "retries" config for the `retry` module to use when fetching
+packages from the registry.
+
+### fetch-retry-factor
+
+* Default: 10
+* Type: Number
+
+The "factor" config for the `retry` module to use when fetching
+packages.
+
+### fetch-retry-mintimeout
+
+* Default: 10000 (10 seconds)
+* Type: Number
+
+The "minTimeout" config for the `retry` module to use when fetching
+packages.
+
+### fetch-retry-maxtimeout
+
+* Default: 60000 (1 minute)
+* Type: Number
+
+The "maxTimeout" config for the `retry` module to use when fetching
+packages.
+
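Taken together, these four settings map onto the `retry` module's options. A purely illustrative `.npmrc` fragment for an unreliable network connection (the values shown are examples, not recommendations) might look like:

```ini
; Hypothetical .npmrc: retry registry fetches more aggressively.
fetch-retries = 5
fetch-retry-factor = 10
fetch-retry-mintimeout = 20000
fetch-retry-maxtimeout = 120000
```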
### git
* Default: `"git"`
@@ -375,6 +412,16 @@ Sets a User-Agent to the request header
A white-space separated list of glob patterns of files to always exclude
from packages when building tarballs.
+### init-module
+
+* Default: ~/.npm-init.js
+* Type: path
+
+A module that will be loaded by the `npm init` command. See the
+documentation for the
+[init-package-json](https://github.com/isaacs/init-package-json) module
+for more information, or npm-init(1).
+
### init.version
* Default: "0.0.0"
@@ -430,13 +477,6 @@ if one of the two conditions are met:
* the globally installed version is identical to the version that is
being installed locally.
-### logfd
-
-* Default: stderr file descriptor
-* Type: Number or Stream
-
-The location to write log output.
-
### loglevel
* Default: "http"
@@ -449,13 +489,17 @@ What level of logs to report. On failure, *all* logs are written to
Any logs of a higher level than the setting are shown.
The default is "http", which shows http, warn, and error output.
-### logprefix
+### logstream
-* Default: true on Posix, false on Windows
-* Type: Boolean
+* Default: process.stderr
+* Type: Stream
+
+This is the stream that is passed to the
+[npmlog](https://github.com/isaacs/npmlog) module at run time.
-Whether or not to prefix log messages with "npm" and the log level. See
-also "color" and "loglevel".
+It cannot be set from the command line, but if you are using npm
+programmatically, you may wish to send logs to somewhere other than
+stderr.
### long
@@ -503,13 +547,6 @@ The url to report npat test results.
A node module to `require()` when npm loads. Useful for programmatic
usage.
-### outfd
-
-* Default: standard output file descriptor
-* Type: Number or Stream
-
-Where to write "normal" output. This has no effect on log output.
-
### parseable
* Default: false
@@ -584,8 +621,23 @@ Remove failed installs.
Save installed packages to a package.json file as dependencies.
+When used with the `npm rm` command, it removes the package from the
+dependencies hash.
+
Only works if there is already a package.json file present.
+### save-bundle
+
+* Default: false
+* Type: Boolean
+
+If a package would be saved at install time by the use of `--save`,
+`--save-dev`, or `--save-optional`, then also put it in the
+`bundleDependencies` list.
+
+When used with the `npm rm` command, it removes the package from the
+bundledDependencies list.
+
### save-dev
* Default: false
@@ -593,6 +645,9 @@ Only works if there is already a package.json file present.
Save installed packages to a package.json file as devDependencies.
+When used with the `npm rm` command, it removes the package from the
+devDependencies hash.
+
Only works if there is already a package.json file present.
### save-optional
@@ -602,6 +657,9 @@ Only works if there is already a package.json file present.
Save installed packages to a package.json file as optionalDependencies.
+When used with the `npm rm` command, it removes the package from the
+optionalDependencies hash.
+
Only works if there is already a package.json file present.
### searchopts
diff --git a/deps/npm/doc/cli/init.md b/deps/npm/doc/cli/init.md
index 39297b4c4d6..d036f924db2 100644
--- a/deps/npm/doc/cli/init.md
+++ b/deps/npm/doc/cli/init.md
@@ -20,5 +20,6 @@ without a really good reason to do so.
## SEE ALSO
+* <https://github.com/isaacs/init-package-json>
* npm-json(1)
* npm-version(1)
diff --git a/deps/npm/doc/cli/install.md b/deps/npm/doc/cli/install.md
index cfa95e72297..1d2f6eca8f8 100644
--- a/deps/npm/doc/cli/install.md
+++ b/deps/npm/doc/cli/install.md
@@ -160,7 +160,7 @@ local copy exists on disk.
npm install sax --force
The `--global` argument will cause npm to install the package globally
-rather than locally. See `npm-global(1)`.
+rather than locally. See `npm-folders(1)`.
The `--link` argument will cause npm to link global installs into the
local space in some cases.
diff --git a/deps/npm/doc/cli/json.md b/deps/npm/doc/cli/json.md
index ddd500e3b12..b6bf89ca37f 100644
--- a/deps/npm/doc/cli/json.md
+++ b/deps/npm/doc/cli/json.md
@@ -394,6 +394,7 @@ Git urls can be of the form:
git://github.com/user/project.git#commit-ish
git+ssh://user@hostname:project.git#commit-ish
+ git+ssh://user@hostname/project.git#commit-ish
git+http://user@hostname/project/blah.git#commit-ish
git+https://user@hostname/project/blah.git#commit-ish
@@ -420,10 +421,39 @@ Array of package names that will be bundled when publishing the package.
If this is spelled `"bundleDependencies"`, then that is also honorable.
+## optionalDependencies
+
+If a dependency can be used, but you would like npm to proceed if it
+cannot be found or fails to install, then you may put it in the
+`optionalDependencies` hash. This is a map of package name to version
+or url, just like the `dependencies` hash. The difference is that
+failure is tolerated.
+
+It is still your program's responsibility to handle the lack of the
+dependency. For example, something like this:
+
+ try {
+ var foo = require('foo')
+ var fooVersion = require('foo/package.json').version
+ } catch (er) {
+ foo = null
+ }
+ if ( notGoodFooVersion(fooVersion) ) {
+ foo = null
+ }
+
+ // .. then later in your program ..
+
+ if (foo) {
+ foo.doFooThings()
+ }
+
+Entries in `optionalDependencies` will override entries of the same name in
+`dependencies`, so it's usually best to only put it in one place.
+
## engines
-You can specify the version of
-node that your stuff works on:
+You can specify the version of node that your stuff works on:
{ "engines" : { "node" : ">=0.1.27 <0.1.30" } }
@@ -439,6 +469,22 @@ are capable of properly installing your program. For example:
{ "engines" : { "npm" : "~1.0.20" } }
+Note that, unless the user has set the `engine-strict` config flag, this
+field is advisory only.
+
+## engineStrict
+
+If you are sure that your module will *definitely not* run properly on
+versions of Node/npm other than those specified in the `engines` hash,
+then you can set `"engineStrict": true` in your package.json file.
+This will override the user's `engine-strict` config setting.
+
+Please do not do this unless you are really very very sure. If your
+engines hash is something overly restrictive, you can quite easily and
+inadvertently lock yourself into obscurity and prevent your users from
+updating to new versions of Node. Consider this choice carefully. If
+people abuse it, it will be removed in a future version of npm.
+
## os
You can specify which operating systems your
diff --git a/deps/npm/html/api/GubbleBum-Blocky.ttf b/deps/npm/html/api/GubbleBum-Blocky.ttf
deleted file mode 100755
index 8eac02f7ada..00000000000
Binary files a/deps/npm/html/api/GubbleBum-Blocky.ttf and /dev/null differ
diff --git a/deps/npm/html/api/author.html b/deps/npm/html/api/author.html
deleted file mode 100644
index 0625fbc183e..00000000000
--- a/deps/npm/html/api/author.html
+++ /dev/null
@@ -1,69 +0,0 @@
+
diff --git a/doc/api/_toc.markdown b/doc/api/_toc.markdown
index 0e90fe6c765..d8da1740e2e 100644
--- a/doc/api/_toc.markdown
+++ b/doc/api/_toc.markdown
@@ -23,6 +23,7 @@
* [HTTPS](https.html)
* [URL](url.html)
* [Query Strings](querystring.html)
+* [Punycode](punycode.html)
* [Readline](readline.html)
* [REPL](repl.html)
* [VM](vm.html)
diff --git a/doc/api/all.markdown b/doc/api/all.markdown
index c62526713e4..044c0aee974 100644
--- a/doc/api/all.markdown
+++ b/doc/api/all.markdown
@@ -23,6 +23,7 @@
@include https
@include url
@include querystring
+@include punycode
@include readline
@include repl
@include vm
diff --git a/doc/api/buffer.markdown b/doc/api/buffer.markdown
index 82a36eedc6f..ff95304ed6f 100644
--- a/doc/api/buffer.markdown
+++ b/doc/api/buffer.markdown
@@ -148,6 +148,26 @@ Example:
// ½ + ¼ = ¾: 9 characters, 12 bytes
+### Class Method: Buffer.concat(list, [totalLength])
+
+* `list` {Array} List of Buffer objects to concat
+* `totalLength` {Number} Total length of the buffers when concatenated
+
+Returns a buffer which is the result of concatenating all the buffers in
+the list together.
+
+If the list has no items, or if the totalLength is 0, then it returns a
+zero-length buffer.
+
+If the list has exactly one item, then the first item of the list is
+returned.
+
+If the list has more than one item, then a new Buffer is created.
+
+If totalLength is not provided, it is computed from the buffers in the
+list. However, this adds an additional loop to the function, so it is
+faster to provide the length explicitly.
+
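A short sketch of the behavior described above (the strings here are arbitrary):

```javascript
// Concatenate three buffers, passing the total length explicitly
// to skip the extra length-computing loop.
var a = new Buffer('Hello, ');
var b = new Buffer('Buffer ');
var c = new Buffer('world!');

var joined = Buffer.concat([a, b, c], a.length + b.length + c.length);
console.log(joined.toString()); // 'Hello, Buffer world!'
```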
### buf.length
* Number
diff --git a/doc/api/child_process.markdown b/doc/api/child_process.markdown
index e9aab649f4b..99f72db7f67 100644
--- a/doc/api/child_process.markdown
+++ b/doc/api/child_process.markdown
@@ -14,7 +14,7 @@ different, and explained below.
## Class: ChildProcess
-`ChildProcess` is an `EventEmitter`.
+`ChildProcess` is an [EventEmitter][].
Child processes always have three streams associated with them. `child.stdin`,
`child.stdout`, and `child.stderr`. These may be shared with the stdio
@@ -249,7 +249,7 @@ there is no IPC channel keeping it alive. When calling this method the
* `customFds` {Array} **Deprecated** File descriptors for the child to use
for stdio. (See below)
* `env` {Object} Environment key-value pairs
- * `setsid` {Boolean}
+ * `detached` {Boolean} The child will be a process group leader. (See below)
* return: {ChildProcess object}
Launches a new process with the given `command`, with command line arguments in `args`.
@@ -340,22 +340,31 @@ API.
The 'stdio' option to `child_process.spawn()` is an array where each
index corresponds to a fd in the child. The value is one of the following:
-1. `null`, `undefined` - Use default value. For 0,1,2 stdios this is the same
- as `'pipe'`. For any higher value, `'ignore'`
-2. `'ignore'` - Open the fd in the child, but do not expose it to the parent
-3. `'pipe'` - Open the fd and expose as a `Stream` object to parent.
-4. `'ipc'` - Create IPC channel for passing messages/file descriptors between
- parent and child.
-
- Note: A ChildProcess may have at most *one* IPC stdio file descriptor.
- Setting this option enables the ChildProcess.send() method. If the
- child writes JSON messages to this file descriptor, then this will trigger
- ChildProcess.on('message'). If the child is a Node.js program, then
- the presence of an IPC channel will enable process.send() and
- process.on('message')
-5. positive integer - Share corresponding fd with child
-6. Any TTY, TCP, File stream (or any object with `fd` property) - Share
- corresponding stream with child.
+1. `'pipe'` - Create a pipe between the child process and the parent process.
+ The parent end of the pipe is exposed to the parent as a property on the
+ `child_process` object as `ChildProcess.stdio[fd]`. Pipes created for
+ fds 0 - 2 are also available as ChildProcess.stdin, ChildProcess.stdout
+ and ChildProcess.stderr, respectively.
+2. `'ipc'` - Create an IPC channel for passing messages/file descriptors
+ between parent and child. A ChildProcess may have at most *one* IPC stdio
+ file descriptor. Setting this option enables the ChildProcess.send() method.
+ If the child writes JSON messages to this file descriptor, then this will
+ trigger ChildProcess.on('message'). If the child is a Node.js program, then
+ the presence of an IPC channel will enable process.send() and
+ process.on('message').
+3. `'ignore'` - Do not set this file descriptor in the child. Note that Node
+   will always open fds 0 - 2 for the processes it spawns. When any of these
+   is ignored, Node will open `/dev/null` and attach it to the child's fd.
+4. `Stream` object - Share a readable or writable stream that refers to a tty,
+ file, socket, or a pipe with the child process. The stream's underlying
+ file descriptor is duplicated in the child process to the fd that
+ corresponds to the index in the `stdio` array.
+5. Positive integer - The integer value is interpreted as a file descriptor
+   that is currently open in the parent process. It is shared with the child
+   process, similar to how `Stream` objects can be shared.
+6. `null`, `undefined` - Use default value. For stdio fds 0, 1 and 2 (in other
+ words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the
+ default is `'ignore'`.
As a shorthand, the `stdio` argument may also be one of the following
strings, rather than an array:
@@ -378,6 +387,34 @@ Example:
// startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });
+If the `detached` option is set, the child process will be made the leader of a
+new process group. This makes it possible for the child to continue running
+after the parent exits.
+
+By default, the parent will wait for the detached child to exit. To prevent
+the parent from waiting for a given `child`, use the `child.unref()` method,
+and the parent's event loop will not include the child in its reference count.
+
+Example of detaching a long-running process and redirecting its output to a
+file:
+
+ var fs = require('fs'),
+ spawn = require('child_process').spawn,
+ out = fs.openSync('./out.log', 'a'),
+ err = fs.openSync('./out.log', 'a');
+
+ var child = spawn('prg', [], {
+ detached: true,
+ stdio: [ 'ignore', out, err ]
+ });
+
+ child.unref();
+
+When using the `detached` option to start a long-running process, the process
+will not stay running in the background unless it is provided with a `stdio`
+configuration that is not connected to the parent. If the parent's `stdio` is
+inherited, the child will remain attached to the controlling terminal.
+
There is a deprecated option called `customFds` which allows one to specify
specific file descriptors for the stdio of the child process. This API was
not portable to all platforms and therefore removed.
@@ -400,7 +437,6 @@ See also: `child_process.exec()` and `child_process.fork()`
* `customFds` {Array} **Deprecated** File descriptors for the child to use
for stdio. (See above)
* `env` {Object} Environment key-value pairs
- * `setsid` {Boolean}
* `encoding` {String} (Default: 'utf8')
* `timeout` {Number} (Default: 0)
* `maxBuffer` {Number} (Default: 200*1024)
@@ -458,10 +494,9 @@ the child process is killed.
* `customFds` {Array} **Deprecated** File descriptors for the child to use
for stdio. (See above)
* `env` {Object} Environment key-value pairs
- * `setsid` {Boolean}
* `encoding` {String} (Default: 'utf8')
* `timeout` {Number} (Default: 0)
- * `maxBuffer` {Number} (Default: 200*1024)
+ * `maxBuffer` {Number} (Default: 200\*1024)
* `killSignal` {String} (Default: 'SIGTERM')
* `callback` {Function} called with the output when process terminates
* `error` {Error}
@@ -474,14 +509,13 @@ subshell but rather the specified file directly. This makes it slightly
leaner than `child_process.exec`. It has the same options.
-## child_process.fork(modulePath, [args], [options])
+## child\_process.fork(modulePath, [args], [options])
* `modulePath` {String} The module to run in the child
* `args` {Array} List of string arguments
* `options` {Object}
* `cwd` {String} Current working directory of the child process
* `env` {Object} Environment key-value pairs
- * `setsid` {Boolean}
* `encoding` {String} (Default: 'utf8')
* `timeout` {Number} (Default: 0)
* Return: ChildProcess object
@@ -498,3 +532,5 @@ with the parent's. To change this behavior set the `silent` property in the
These child Nodes are still whole new instances of V8. Assume at least 30ms
startup and 10mb memory for each new Node. That is, you cannot create many
thousands of them.
+
+[EventEmitter]: events.html#events_class_events_eventemitter
diff --git a/doc/api/cluster.markdown b/doc/api/cluster.markdown
index 3b67a33bfe4..bb1a77c980c 100644
--- a/doc/api/cluster.markdown
+++ b/doc/api/cluster.markdown
@@ -41,6 +41,55 @@ Running node will now share port 8000 between the workers:
This feature was introduced recently, and may change in future versions.
Please try it out and provide feedback.
+Also note that, on Windows, it is not yet possible to set up a named pipe
+server in a worker.
+
+## How It Works
+
+
+
+The worker processes are spawned using the `child_process.fork` method,
+so that they can communicate with the parent via IPC and pass server
+handles back and forth.
+
+When you call `server.listen(...)` in a worker, it serializes the
+arguments and passes the request to the master process. If the master
+process already has a listening server matching the worker's
+requirements, then it passes the handle to the worker. If it does not
+already have a listening server matching that requirement, then it will
+create one, and pass the handle to the child.
+
+This causes potentially surprising behavior in three edge cases:
+
+1. `server.listen({fd: 7})` Because the message is passed to the master,
+ file descriptor 7 **in the parent** will be listened on, and the
+ handle passed to the worker, rather than listening to the worker's
+ idea of what the number 7 file descriptor references.
+2. `server.listen(handle)` Listening on handles explicitly will cause
+ the worker to use the supplied handle, rather than talk to the master
+ process. If the worker already has the handle, then it's presumed
+ that you know what you are doing.
+3. `server.listen(0)` Normally, this will cause servers to listen on a
+ random port. However, in a cluster, each worker will receive the
+ same "random" port each time they do `listen(0)`. In essence, the
+ port is random the first time, but predictable thereafter. If you
+ want to listen on a unique port, generate a port number based on the
+ cluster worker ID.
+
+When multiple processes are all `accept()`ing on the same underlying
+resource, the operating system load-balances across them very
+efficiently. There is no routing logic in Node.js, or in your program,
+and no shared state between the workers. Therefore, it is important to
+design your program such that it does not rely too heavily on in-memory
+data objects for things like sessions and login.
+
+Because workers are all separate processes, they can be killed or
+re-spawned depending on your program's needs, without affecting other
+workers. As long as there are some workers still alive, the server will
+continue to accept connections. Node does not automatically manage the
+number of workers for you, however. It is your responsibility to manage
+the worker pool for your application's needs.
+
## cluster.settings
* {Object}
@@ -82,13 +131,13 @@ This can be used to log worker activity, or to create your own timeout.
}
cluster.on('fork', function(worker) {
- timeouts[worker.uniqueID] = setTimeout(errorMsg, 2000);
+ timeouts[worker.id] = setTimeout(errorMsg, 2000);
});
cluster.on('listening', function(worker, address) {
- clearTimeout(timeouts[worker.uniqueID]);
+ clearTimeout(timeouts[worker.id]);
});
cluster.on('exit', function(worker, code, signal) {
- clearTimeout(timeouts[worker.uniqueID]);
+ clearTimeout(timeouts[worker.id]);
errorMsg();
});
@@ -137,13 +186,13 @@ the process is stuck in a cleanup or if there are long-living
connections.
cluster.on('disconnect', function(worker) {
- console.log('The worker #' + worker.uniqueID + ' has disconnected');
+ console.log('The worker #' + worker.id + ' has disconnected');
});
## Event: 'exit'
* `worker` {Worker object}
-* `code` {Number} the exit code, if it exited normally.
+* `code` {Number} the exit code, if it exited normally.
* `signal` {String} the name of the signal (eg. `'SIGHUP'`) that caused
the process to be killed.
@@ -184,7 +233,7 @@ Example:
args : ["--use", "https"],
silent : true
});
- cluster.autoFork();
+ cluster.fork();
## cluster.fork([env])
@@ -219,13 +268,13 @@ The method takes an optional callback argument which will be called when finishe
* {Object}
-In the cluster, all living worker objects are stored in this object with their
-`uniqueID` as the key. This makes it easy to loop through all living workers.
+In the cluster, all living worker objects are stored in this object with
+their `id` as the key. This makes it easy to loop through all living workers.
// Go through all workers
function eachWorker(callback) {
- for (var uniqueID in cluster.workers) {
- callback(cluster.workers[uniqueID]);
+ for (var id in cluster.workers) {
+ callback(cluster.workers[id]);
}
}
eachWorker(function(worker) {
@@ -233,10 +282,10 @@ In the cluster, all living worker objects are stored in this object with their
});
Should you wish to reference a worker over a communication channel, using
-the worker's uniqueID is the easiest way to find the worker.
+the worker's unique id is the easiest way to find the worker.
- socket.on('data', function(uniqueID) {
- var worker = cluster.workers[uniqueID];
+ socket.on('data', function(id) {
+ var worker = cluster.workers[id];
});
## Class: Worker
@@ -245,12 +294,12 @@ A Worker object contains all public information and method about a worker.
In the master it can be obtained using `cluster.workers`. In a worker
it can be obtained using `cluster.worker`.
-### worker.uniqueID
+### worker.id
* {String}
Each new worker is given its own unique id; this id is stored in
-`uniqueID`.
+`id`.
While a worker is alive, this is the key that indexes it in
cluster.workers
@@ -386,9 +435,13 @@ in the master process using the message system:
}
// Start workers and listen for messages containing notifyRequest
- cluster.autoFork();
- Object.keys(cluster.workers).forEach(function(uniqueID) {
- cluster.workers[uniqueID].on('message', messageHandler);
+ var numCPUs = require('os').cpus().length;
+ for (var i = 0; i < numCPUs; i++) {
+ cluster.fork();
+ }
+
+ Object.keys(cluster.workers).forEach(function(id) {
+ cluster.workers[id].on('message', messageHandler);
});
} else {
@@ -434,12 +487,12 @@ on the specified worker.
### Event: 'exit'
-* `code` {Number} the exit code, if it exited normally.
+* `code` {Number} the exit code, if it exited normally.
* `signal` {String} the name of the signal (eg. `'SIGHUP'`) that caused
the process to be killed.
Emitted by the individual worker instance, when the underlying child process
-is terminated. See [child_process event: 'exit'](child_process.html#child_process_event_exit).
+is terminated. See [child_process event: 'exit'](child_process.html#child_process_event_exit).
var worker = cluster.fork();
worker.on('exit', function(code, signal) {
diff --git a/doc/api/crypto.markdown b/doc/api/crypto.markdown
index 3537fc13b90..87b9ac37827 100644
--- a/doc/api/crypto.markdown
+++ b/doc/api/crypto.markdown
@@ -111,16 +111,19 @@ Creates and returns a cipher object, with the given algorithm and password.
`algorithm` is dependent on OpenSSL, examples are `'aes192'`, etc.
On recent releases, `openssl list-cipher-algorithms` will display the
available cipher algorithms.
-`password` is used to derive key and IV, which must be `'binary'` encoded
-string (See the [Buffer section](buffer.html) for more information).
+`password` is used to derive the key and IV; it must be a `'binary'` encoded
+string or a [buffer](buffer.html).
## crypto.createCipheriv(algorithm, key, iv)
Creates and returns a cipher object, with the given algorithm, key and iv.
-`algorithm` is the same as the `createCipher()`. `key` is a raw key used in
-algorithm. `iv` is an Initialization vector. `key` and `iv` must be `'binary'`
-encoded string (See the [Buffer section](buffer.html) for more information).
+`algorithm` is the same as the argument to `createCipher()`.
+`key` is the raw key used by the algorithm.
+`iv` is an [initialization
+vector](http://en.wikipedia.org/wiki/Initialization_vector).
+
+`key` and `iv` must be `'binary'` encoded strings or [buffers](buffer.html).
## Class: Cipher
@@ -156,12 +159,12 @@ Useful for non-standard padding, e.g. using `0x0` instead of PKCS padding. You m
## crypto.createDecipher(algorithm, password)
Creates and returns a decipher object, with the given algorithm and key.
-This is the mirror of the [createCipher()](#crypto.createCipher) above.
+This is the mirror of the [createCipher()][] above.
## crypto.createDecipheriv(algorithm, key, iv)
Creates and returns a decipher object, with the given algorithm, key and iv.
-This is the mirror of the [createCipheriv()](#crypto.createCipheriv) above.
+This is the mirror of the [createCipheriv()][] above.
## Class: Decipher
@@ -313,13 +316,13 @@ or `'base64'`. Defaults to `'binary'`.
Creates a predefined Diffie-Hellman key exchange object.
The supported groups are: `'modp1'`, `'modp2'`, `'modp5'`
-(defined in [RFC 2412](http://www.rfc-editor.org/rfc/rfc2412.txt ))
+(defined in [RFC 2412][])
and `'modp14'`, `'modp15'`, `'modp16'`, `'modp17'`, `'modp18'`
-(defined in [RFC 3526](http://www.rfc-editor.org/rfc/rfc3526.txt )).
+(defined in [RFC 3526][]).
The returned object mimics the interface of objects created by
-[crypto.createDiffieHellman()](#crypto.createDiffieHellman) above, but
+[crypto.createDiffieHellman()][] above, but
will not allow to change the keys (with
-[diffieHellman.setPublicKey()](#diffieHellman.setPublicKey) for example).
+[diffieHellman.setPublicKey()][] for example).
The advantage of using this routine is that the parties don't have to
generate nor exchange group modulus beforehand, saving both processor and
communication time.
@@ -362,3 +365,10 @@ Generates cryptographically strong pseudo-random data. Usage:
} catch (ex) {
// handle error
}
+
+[createCipher()]: #crypto_crypto_createcipher_algorithm_password
+[createCipheriv()]: #crypto_crypto_createcipheriv_algorithm_key_iv
+[crypto.createDiffieHellman()]: #crypto_crypto_creatediffiehellman_prime_encoding
+[diffieHellman.setPublicKey()]: #crypto_diffiehellman_setpublickey_public_key_encoding
+[RFC 2412]: http://www.rfc-editor.org/rfc/rfc2412.txt
+[RFC 3526]: http://www.rfc-editor.org/rfc/rfc3526.txt
diff --git a/doc/api/domain.markdown b/doc/api/domain.markdown
index 5bbdcc33a0a..84b178607b4 100644
--- a/doc/api/domain.markdown
+++ b/doc/api/domain.markdown
@@ -119,7 +119,7 @@ Returns a new Domain object.
The Domain class encapsulates the functionality of routing errors and
uncaught exceptions to the active Domain object.
-Domain is a child class of EventEmitter. To handle the errors that it
+Domain is a child class of [EventEmitter][]. To handle the errors that it
catches, listen to its `error` event.
### domain.run(fn)
@@ -227,13 +227,16 @@ with a single error handler in a single place.
var d = domain.create();
function readSomeFile(filename, cb) {
- fs.readFile(filename, d.intercept(function(er, data) {
+ fs.readFile(filename, d.intercept(function(data) {
+ // note, the first argument is never passed to the
+ // callback since it is assumed to be the 'Error' argument
+ // and thus intercepted by the domain.
+
// if this throws, it will also be passed to the domain
- // additionally, we know that 'er' will always be null,
// so the error-handling logic can be moved to the 'error'
// event on the domain instead of being repeated throughout
// the program.
- return cb(er, JSON.parse(data));
+ return cb(null, JSON.parse(data));
}));
}
@@ -255,7 +258,11 @@ The intention of calling `dispose` is generally to prevent cascading
errors when a critical part of the Domain context is found to be in an
error state.
+Once the domain is disposed, the `dispose` event will be emitted.
+
Note that IO might still be performed. However, to the highest degree
possible, once a domain is disposed, further errors from the emitters in
that set will be ignored. So, even if some remaining actions are still
in flight, Node.js will not communicate further about them.
+
+[EventEmitter]: events.html#events_class_events_eventemitter
diff --git a/doc/api/events.markdown b/doc/api/events.markdown
index fed957d3450..b9be5dc5aad 100644
--- a/doc/api/events.markdown
+++ b/doc/api/events.markdown
@@ -64,6 +64,9 @@ Remove a listener from the listener array for the specified event.
Removes all listeners, or those of the specified event.
+Note that this will **invalidate** any arrays that have previously been
+returned by `emitter.listeners(event)`.
+
### emitter.setMaxListeners(n)
@@ -75,14 +78,27 @@ that to be increased. Set to zero for unlimited.
### emitter.listeners(event)
-Returns an array of listeners for the specified event. This array can be
-manipulated, e.g. to remove listeners.
+Returns an array of listeners for the specified event.
server.on('connection', function (stream) {
console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection'))); // [ [Function] ]
+This array **may** be a mutable reference to the same underlying list of
+listeners that is used by the event subsystem. However, certain
+actions (specifically, `removeAllListeners()`) will invalidate this
+reference.
+
+If you would like to get a copy of the listeners at a specific point in
+time that is guaranteed not to change, make a copy, for example by doing
+`emitter.listeners(event).slice(0)`.
+
+In a future release of node, this behavior **may** change to always
+return a copy, for consistency. In your programs, please do not rely on
+being able to modify the EventEmitter listeners using array methods.
+Always use the `on()` method to add new listeners.
+
### emitter.emit(event, [arg1], [arg2], [...])
Execute each of the listeners in order with the supplied arguments.
diff --git a/doc/api/fs.markdown b/doc/api/fs.markdown
index 0554eef5170..4c97a179439 100644
--- a/doc/api/fs.markdown
+++ b/doc/api/fs.markdown
@@ -130,6 +130,8 @@ Synchronous fchmod(2).
Asynchronous lchmod(2). No arguments other than a possible exception
are given to the completion callback.
+Only available on Mac OS X.
+
## fs.lchmodSync(path, mode)
Synchronous lchmod(2).
@@ -355,13 +357,7 @@ without waiting for the callback. For this scenario,
## fs.writeSync(fd, buffer, offset, length, position)
-Synchronous version of buffer-based `fs.write()`. Returns the number of bytes
-written.
-
-## fs.writeSync(fd, str, position, [encoding])
-
-Synchronous version of string-based `fs.write()`. `encoding` defaults to
-`'utf8'`. Returns the number of _bytes_ written.
+Synchronous version of `fs.write()`. Returns the number of bytes written.
## fs.read(fd, buffer, offset, length, position, [callback])
@@ -380,13 +376,7 @@ The callback is given the three arguments, `(err, bytesRead, buffer)`.
## fs.readSync(fd, buffer, offset, length, position)
-Synchronous version of buffer-based `fs.read`. Returns the number of
-`bytesRead`.
-
-## fs.readSync(fd, length, position, encoding)
-
-Legacy synchronous version of string-based `fs.read`. Returns an array with the
-data from the file specified and number of bytes read, `[string, bytesRead]`.
+Synchronous version of `fs.read`. Returns the number of `bytesRead`.
## fs.readFile(filename, [encoding], [callback])
@@ -456,8 +446,7 @@ The second argument is optional. The `options` if provided should be an object
containing two members a boolean, `persistent`, and `interval`. `persistent`
indicates whether the process should continue to run as long as files are
being watched. `interval` indicates how often the target should be polled,
-in milliseconds. (On Linux systems with inotify, `interval` is ignored.) The
-default is `{ persistent: true, interval: 0 }`.
+in milliseconds. The default is `{ persistent: true, interval: 5007 }`.
The `listener` gets two arguments the current stat object and the previous
stat object:
diff --git a/doc/api/globals.markdown b/doc/api/globals.markdown
index d8fb257e7a4..0ca2b0db9d9 100644
--- a/doc/api/globals.markdown
+++ b/doc/api/globals.markdown
@@ -22,7 +22,7 @@ scope; `var something` inside a Node module will be local to that module.
* {Object}
-The process object. See the [process object](process.html#process) section.
+The process object. See the [process object][] section.
## console
@@ -30,7 +30,7 @@ The process object. See the [process object](process.html#process) section.
* {Object}
-Used to print to stdout and stderr. See the [stdio](stdio.html) section.
+Used to print to stdout and stderr. See the [stdio][] section.
## Class: Buffer
@@ -38,7 +38,7 @@ Used to print to stdout and stderr. See the [stdio](stdio.html) section.
* {Function}
-Used to handle binary data. See the [buffer section](buffer.html).
+Used to handle binary data. See the [buffer section][].
## require()
@@ -46,8 +46,8 @@ Used to handle binary data. See the [buffer section](buffer.html).
* {Function}
-To require modules. See the [Modules](modules.html#modules) section.
-`require` isn't actually a global but rather local to each module.
+To require modules. See the [Modules][] section. `require` isn't actually a
+global but rather local to each module.
### require.resolve()
@@ -123,8 +123,7 @@ A reference to the current module. In particular
`module.exports` is the same as the `exports` object.
`module` isn't actually a global but rather local to each module.
-See the [module system documentation](modules.html) for more
-information.
+See the [module system documentation][] for more information.
## exports
@@ -135,16 +134,51 @@ made accessible through `require()`.
`exports` is the same as the `module.exports` object.
`exports` isn't actually a global but rather local to each module.
-See the [module system documentation](modules.html) for more
-information.
+See the [module system documentation][] for more information.
-See the [module section](modules.html) for more information.
+See the [module section][] for more information.
## setTimeout(cb, ms)
+
+Run callback `cb` after *at least* `ms` milliseconds. The actual delay depends
+on external factors like OS timer granularity and system load.
+
+The timeout must be in the range of 1 to 2,147,483,647 milliseconds, inclusive.
+If the value is outside that range, it is changed to 1 millisecond. Broadly
+speaking, a timer cannot span more than 24.8 days.
+
+Returns an opaque value that represents the timer.
+
## clearTimeout(t)
+
+Stop a timer that was previously created with `setTimeout()`. The callback will
+not execute.
+
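For instance, assuming the clamping behavior described above, an out-of-range delay would fire almost immediately unless cleared:

```javascript
// 3e9 ms (~34.7 days) exceeds the 2,147,483,647 ms maximum,
// so the delay is clamped to 1 millisecond
var t = setTimeout(function() {
  console.log('fires almost immediately unless cleared');
}, 3000000000);

// the opaque return value is only useful for clearing the timer
clearTimeout(t); // the callback never runs
```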
## setInterval(cb, ms)
+
+Run callback `cb` repeatedly every `ms` milliseconds. Note that the actual
+interval may vary, depending on external factors like OS timer granularity and
+system load. It's never less than `ms` but it may be longer.
+
+The interval must be in the range of 1 to 2,147,483,647 milliseconds, inclusive.
+If the value is outside that range, it is changed to 1 millisecond. Broadly
+speaking, a timer cannot span more than 24.8 days.
+
+Returns an opaque value that represents the timer.
+
## clearInterval(t)
+
+Stop a timer that was previously created with `setInterval()`. The callback
+will not execute.
+
-The timer functions are global variables. See the [timers](timers.html) section.
+The timer functions are global variables. See the [timers][] section.
+
+[buffer section]: buffer.html
+[module section]: modules.html
+[module system documentation]: modules.html
+[Modules]: modules.html#modules_modules
+[process object]: process.html#process_process
+[stdio]: stdio.html
+[timers]: timers.html
diff --git a/doc/api/http.markdown b/doc/api/http.markdown
index 962aa49081f..dbd31ac2216 100644
--- a/doc/api/http.markdown
+++ b/doc/api/http.markdown
@@ -42,13 +42,13 @@ added to the `'request'` event.
## http.createClient([port], [host])
-This function is **deprecated**; please use
-[http.request()](#http_http_request_options_callback) instead. Constructs a new
-HTTP client. `port` and `host` refer to the server to be connected to.
+This function is **deprecated**; please use [http.request()][] instead.
+Constructs a new HTTP client. `port` and `host` refer to the server to be
+connected to.
## Class: http.Server
-This is an `EventEmitter` with the following events:
+This is an [EventEmitter][] with the following events:
### Event: 'request'
@@ -145,23 +145,38 @@ The actual length will be determined by your OS through sysctl settings such as
parameter is 511 (not 512).
This function is asynchronous. The last parameter `callback` will be added as
-a listener for the ['listening'](net.html#event_listening_) event.
-See also [net.Server.listen()](net.html#server.listen).
+a listener for the ['listening'][] event. See also [net.Server.listen(port)][].
### server.listen(path, [callback])
Start a UNIX socket server listening for connections on the given `path`.
+This function is asynchronous. The last parameter `callback` will be added as
+a listener for the ['listening'][] event. See also [net.Server.listen(path)][].
+
+
+### server.listen(handle, [listeningListener])
+
+* `handle` {Object}
+* `listeningListener` {Function}
+
+The `handle` object can be set to either a server or a socket (anything
+with an underlying `_handle` member), or a `{fd: }` object.
+
+This will cause the server to accept connections on the specified
+handle, but it is presumed that the file descriptor or handle has
+already been bound to a port or domain socket.
+
+Listening on a file descriptor is not supported on Windows.
+
This function is asynchronous. The last parameter `callback` will be added as
a listener for the ['listening'](net.html#event_listening_) event.
See also [net.Server.listen()](net.html#server.listen).
-
### server.close([cb])
-Stops the server from accepting new connections.
-See [net.Server.close()](net.html#server.close).
+Stops the server from accepting new connections. See [net.Server.close()][].
### server.maxHeadersCount
@@ -175,8 +190,8 @@ no limit will be applied.
This object is created internally by a HTTP server -- not by
the user -- and passed as the first argument to a `'request'` listener.
-The request implements the [Readable Stream](stream.html#readable_stream)
-interface. This is an `EventEmitter` with the following events:
+The request implements the [Readable Stream][] interface. This is an
+[EventEmitter][] with the following events:
### Event: 'data'
@@ -184,7 +199,7 @@ interface. This is an `EventEmitter` with the following events:
Emitted when a piece of the message body is received. The chunk is a string if
an encoding has been set with `request.setEncoding()`, otherwise it's a
-[Buffer](buffer.html).
+[Buffer][].
Note that the __data will be lost__ if there is no listener when a
`ServerRequest` emits a `'data'` event.
@@ -266,9 +281,8 @@ Also `request.httpVersionMajor` is the first integer and
### request.setEncoding([encoding])
-Set the encoding for the request body. See
-[stream.setEncoding()](stream.html#stream_stream_setencoding_encoding)
-for more information.
+Set the encoding for the request body. See [stream.setEncoding()][] for more
+information.
### request.pause()
@@ -295,8 +309,8 @@ authentication details.
This object is created internally by a HTTP server--not by the user. It is
passed as the second parameter to the `'request'` event.
-The response implements the [Writable Stream](stream.html#writable_stream)
-interface. This is an `EventEmitter` with the following events:
+The response implements the [Writable Stream][] interface. This is an
+[EventEmitter][] with the following events:
### Event: 'close'
@@ -308,8 +322,7 @@ Indicates that the underlaying connection was terminated before
### response.writeContinue()
Sends a HTTP/1.1 100 Continue message to the client, indicating that
-the request body should be sent. See the [checkContinue](#event_checkContinue_) event on
-`Server`.
+the request body should be sent. See the ['checkContinue'][] event on `Server`.
### response.writeHead(statusCode, [reasonPhrase], [headers])
@@ -449,7 +462,7 @@ Node maintains several connections per server to make HTTP requests.
This function allows one to transparently issue requests.
`options` can be an object or a string. If `options` is a string, it is
-automatically parsed with [url.parse()](url.html#url.parse).
+automatically parsed with [url.parse()][].
Options:
@@ -465,10 +478,9 @@ Options:
- `headers`: An object containing request headers.
- `auth`: Basic authentication i.e. `'user:password'` to compute an
Authorization header.
-- `agent`: Controls [Agent](#http.Agent) behavior. When an Agent is used
- request will default to `Connection: keep-alive`. Possible values:
- - `undefined` (default): use [global Agent](#http.globalAgent) for this host
- and port.
+- `agent`: Controls [Agent][] behavior. When an Agent is used request will
+ default to `Connection: keep-alive`. Possible values:
+ - `undefined` (default): use [global Agent][] for this host and port.
- `Agent` object: explicitly use the passed in `Agent`.
- `false`: opts out of connection pooling with an Agent, defaults request to
`Connection: close`.
@@ -634,15 +646,15 @@ event, the entire body will be caught.
Note: Node does not check whether Content-Length and the length of the body
which has been transmitted are equal or not.
-The request implements the [Writable Stream](stream.html#writable_stream)
-interface. This is an `EventEmitter` with the following events:
+The request implements the [Writable Stream][] interface. This is an
+[EventEmitter][] with the following events:
### Event 'response'
`function (response) { }`
-Emitted when a response is received to this request. This event is emitted only once. The
-`response` argument will be an instance of `http.ClientResponse`.
+Emitted when a response is received to this request. This event is emitted only
+once. The `response` argument will be an instance of `http.ClientResponse`.
Options:
@@ -784,7 +796,7 @@ server--in that case it is suggested to use the
`['Transfer-Encoding', 'chunked']` header line when
creating the request.
-The `chunk` argument should be a [buffer](buffer.html) or a string.
+The `chunk` argument should be a [Buffer][] or a string.
The `encoding` argument is optional and only applies when `chunk` is a string.
Defaults to `'utf8'`.
@@ -805,29 +817,26 @@ Aborts a request. (New since v0.3.8.)
### request.setTimeout(timeout, [callback])
-Once a socket is assigned to this request and is connected
-[socket.setTimeout(timeout, [callback])](net.html#socket.setTimeout)
-will be called.
+Once a socket is assigned to this request and is connected,
+[socket.setTimeout()][] will be called.
### request.setNoDelay([noDelay])
-Once a socket is assigned to this request and is connected
-[socket.setNoDelay(noDelay)](net.html#socket.setNoDelay)
-will be called.
+Once a socket is assigned to this request and is connected,
+[socket.setNoDelay()][] will be called.
### request.setSocketKeepAlive([enable], [initialDelay])
-Once a socket is assigned to this request and is connected
-[socket.setKeepAlive(enable, [initialDelay])](net.html#socket.setKeepAlive)
-will be called.
+Once a socket is assigned to this request and is connected,
+[socket.setKeepAlive()][] will be called.
## http.ClientResponse
This object is created when making a request with `http.request()`. It is
passed to the `'response'` event of the request object.
-The response implements the [Readable Stream](stream.html#readable_stream)
-interface. This is an `EventEmitter` with the following events:
+The response implements the [Readable Stream][] interface. This is an
+[EventEmitter][] with the following events:
### Event: 'data'
@@ -853,8 +862,7 @@ emitted no other events will be emitted on the response.
Indicates that the underlaying connection was terminated before
`end` event was emitted.
-See [http.ServerRequest](#http.ServerRequest)'s `'close'` event for more
-information.
+See [http.ServerRequest][]'s `'close'` event for more information.
### response.statusCode
@@ -877,9 +885,8 @@ The response trailers object. Only populated after the 'end' event.
### response.setEncoding([encoding])
-Set the encoding for the response body. See
-[stream.setEncoding()](stream.html#stream_stream_setencoding_encoding)
-for more information.
+Set the encoding for the response body. See [stream.setEncoding()][] for more
+information.
### response.pause()
@@ -888,3 +895,22 @@ Pauses response from emitting events. Useful to throttle back a download.
### response.resume()
Resumes a paused response.
+
+[Agent]: #http_class_http_agent
+['checkContinue']: #http_event_checkcontinue
+[Buffer]: buffer.html#buffer_buffer
+[EventEmitter]: events.html#events_class_events_eventemitter
+[global Agent]: #http_http_globalagent
+[http.request()]: #http_http_request_options_callback
+[http.ServerRequest]: #http_class_http_serverrequest
+['listening']: net.html#net_event_listening
+[net.Server.close()]: net.html#net_server_close_cb
+[net.Server.listen(path)]: net.html#net_server_listen_path_listeninglistener
+[net.Server.listen(port)]: net.html#net_server_listen_port_host_backlog_listeninglistener
+[Readable Stream]: stream.html#stream_readable_stream
+[socket.setKeepAlive()]: net.html#net_socket_setkeepalive_enable_initialdelay
+[socket.setNoDelay()]: net.html#net_socket_setnodelay_nodelay
+[socket.setTimeout()]: net.html#net_socket_settimeout_timeout_callback
+[stream.setEncoding()]: stream.html#stream_stream_setencoding_encoding
+[url.parse()]: url.html#url_url_parse_urlstr_parsequerystring_slashesdenotehost
+[Writable Stream]: stream.html#stream_writable_stream
diff --git a/doc/api/https.markdown b/doc/api/https.markdown
index 10c9a3e3ba9..5f0052c2c3a 100644
--- a/doc/api/https.markdown
+++ b/doc/api/https.markdown
@@ -13,8 +13,8 @@ This class is a subclass of `tls.Server` and emits events same as
## https.createServer(options, [requestListener])
Returns a new HTTPS web server object. The `options` is similar to
-[tls.createServer()](tls.html#tls.createServer). The `requestListener` is
-a function which is automatically added to the `'request'` event.
+[tls.createServer()][]. The `requestListener` is a function which is
+automatically added to the `'request'` event.
Example:
@@ -48,8 +48,8 @@ Or
## https.request(options, callback)
-Makes a request to a secure web server.
-All options from [http.request()](http.html#http.request) are valid.
+Makes a request to a secure web server. All options from [http.request()][]
+are valid.
Example:
@@ -93,16 +93,15 @@ The options argument has the following options
- `headers`: An object containing request headers.
- `auth`: Basic authentication i.e. `'user:password'` to compute an
Authorization header.
-- `agent`: Controls [Agent](#https.Agent) behavior. When an Agent is
- used request will default to `Connection: keep-alive`. Possible values:
- - `undefined` (default): use [globalAgent](#https.globalAgent) for this
- host and port.
+- `agent`: Controls [Agent][] behavior. When an Agent is used request will
+ default to `Connection: keep-alive`. Possible values:
+ - `undefined` (default): use [globalAgent][] for this host and port.
- `Agent` object: explicitly use the passed in `Agent`.
- `false`: opts out of connection pooling with an Agent, defaults request to
`Connection: close`.
-The following options from [tls.connect()](tls.html#tls.connect) can also be
-specified. However, a [globalAgent](#https.globalAgent) silently ignores these.
+The following options from [tls.connect()][] can also be specified. However, a
+[globalAgent][] silently ignores these.
- `pfx`: Certificate, Private key and CA certificates to use for SSL. Default `null`.
- `key`: Private key to use for SSL. Default `null`.
@@ -177,11 +176,18 @@ Example:
## Class: https.Agent
-An Agent object for HTTPS similar to [http.Agent](http.html#http.Agent).
-See [https.request()](#https.request) for more information.
+An Agent object for HTTPS similar to [http.Agent][]. See [https.request()][]
+for more information.
## https.globalAgent
-Global instance of [https.Agent](#https.Agent) which is used as the default
-for all HTTPS client requests.
+Global instance of [https.Agent][], used as the default for all HTTPS client
+requests.
+
+[Agent]: #https_class_https_agent
+[globalAgent]: #https_https_globalagent
+[http.Agent]: http.html#http_class_http_agent
+[http.request()]: http.html#http_http_request_options_callback
+[https.Agent]: #https_class_https_agent
+[tls.connect()]: tls.html#tls_tls_connect_options_secureconnectlistener
+[tls.createServer()]: tls.html#tls_tls_createserver_options_secureconnectionlistener
diff --git a/doc/api/modules.markdown b/doc/api/modules.markdown
index d90cf578739..4720aa5ece1 100644
--- a/doc/api/modules.markdown
+++ b/doc/api/modules.markdown
@@ -383,7 +383,7 @@ Additionally, node will search in the following locations:
* 3: `$PREFIX/lib/node`
Where `$HOME` is the user's home directory, and `$PREFIX` is node's
-configured `installPrefix`.
+configured `node_prefix`.
These are mostly for historic reasons. You are highly encouraged to
place your dependencies locally in `node_modules` folders. They will be
diff --git a/doc/api/net.markdown b/doc/api/net.markdown
index 7fb07df9558..6d4e2382ae0 100644
--- a/doc/api/net.markdown
+++ b/doc/api/net.markdown
@@ -9,8 +9,7 @@ this module with `require('net');`
## net.createServer([options], [connectionListener])
Creates a new TCP server. The `connectionListener` argument is
-automatically set as a listener for the ['connection'](#event_connection_)
-event.
+automatically set as a listener for the ['connection'][] event.
`options` is an object with the following defaults:
@@ -20,7 +19,7 @@ event.
If `allowHalfOpen` is `true`, then the socket won't automatically send a FIN
packet when the other end of the socket sends a FIN packet. The socket becomes
non-readable, but still writable. You should call the `end()` method explicitly.
-See ['end'](#event_end_) event for more information.
+See ['end'][] event for more information.
Here is an example of a echo server which listens for connections
on port 8124:
@@ -55,8 +54,7 @@ Use `nc` to connect to a UNIX domain socket server:
## net.createConnection(options, [connectionListener])
Constructs a new socket object and opens the socket to the given location.
-When the socket is established, the ['connect'](#event_connect_) event will be
-emitted.
+When the socket is established, the ['connect'][] event will be emitted.
For TCP sockets, `options` argument should be an object which specifies:
@@ -74,11 +72,10 @@ Common options are:
- `allowHalfOpen`: if `true`, the socket won't automatically send
a FIN packet when the other end of the socket sends a FIN packet.
- Defaults to `false`.
- See ['end'](#event_end_) event for more information.
+ Defaults to `false`. See ['end'][] event for more information.
The `connectListener` parameter will be added as an listener for the
-['connect'](#event_connect_) event.
+['connect'][] event.
Here is an example of a client of echo server as described previously:
@@ -107,14 +104,14 @@ changed to
Creates a TCP connection to `port` on `host`. If `host` is omitted,
`'localhost'` will be assumed.
The `connectListener` parameter will be added as an listener for the
-['connect'](#event_connect_) event.
+['connect'][] event.
## net.connect(path, [connectListener])
## net.createConnection(path, [connectListener])
Creates unix socket connection to `path`.
The `connectListener` parameter will be added as an listener for the
-['connect'](#event_connect_) event.
+['connect'][] event.
## Class: net.Server
@@ -133,9 +130,8 @@ The actual length will be determined by your OS through sysctl settings such as
parameter is 511 (not 512).
This function is asynchronous. When the server has been bound,
-['listening'](#event_listening_) event will be emitted.
-the last parameter `listeningListener` will be added as an listener for the
-['listening'](#event_listening_) event.
+['listening'][] event will be emitted. The last parameter `listeningListener`
+will be added as a listener for the ['listening'][] event.
One issue some users run into is getting `EADDRINUSE` errors. This means that
another server is already running on the requested port. One way of handling this
@@ -158,6 +154,24 @@ would be to wait a second and then try again. This can be done with
Start a UNIX socket server listening for connections on the given `path`.
+This function is asynchronous. When the server has been bound,
+['listening'][] event will be emitted. The last parameter `listeningListener`
+will be added as a listener for the ['listening'][] event.
+
+### server.listen(handle, [listeningListener])
+
+* `handle` {Object}
+* `listeningListener` {Function}
+
+The `handle` object can be set to either a server or a socket (anything
+with an underlying `_handle` member), or a `{fd: }` object.
+
+This will cause the server to accept connections on the specified
+handle, but it is presumed that the file descriptor or handle has
+already been bound to a port or domain socket.
+
+Listening on a file descriptor is not supported on Windows.
+
This function is asynchronous. When the server has been bound,
['listening'](#event_listening_) event will be emitted.
the last parameter `listeningListener` will be added as an listener for the
@@ -207,7 +221,7 @@ The number of concurrent connections on the server.
This becomes `null` when sending a socket to a child with `child_process.fork()`.
-`net.Server` is an `EventEmitter` with the following events:
+`net.Server` is an [EventEmitter][] with the following events:
### Event: 'listening'
@@ -266,13 +280,12 @@ Normally this method is not needed, as `net.createConnection` opens the
socket. Use this only if you are implementing a custom Socket or if a
Socket is closed and you want to reuse it to connect to another server.
-This function is asynchronous. When the ['connect'](#event_connect_) event is
-emitted the socket is established. If there is a problem connecting, the
-`'connect'` event will not be emitted, the `'error'` event will be emitted with
-the exception.
+This function is asynchronous. When the ['connect'][] event is emitted, the
+socket is established. If there is a problem connecting, the `'connect'` event
+will not be emitted; instead, the `'error'` event fires with the exception.
The `connectListener` parameter will be added as an listener for the
-['connect'](#event_connect_) event.
+['connect'][] event.
### socket.bufferSize
@@ -297,8 +310,7 @@ Users who experience large or growing `bufferSize` should attempt to
### socket.setEncoding([encoding])
Set the encoding for the socket as a Readable Stream. See
-[stream.setEncoding()](stream.html#stream_stream_setencoding_encoding)
-for more information.
+[stream.setEncoding()][] for more information.
### socket.write(data, [encoding], [callback])
@@ -392,7 +404,7 @@ The amount of received bytes.
The amount of bytes sent.
-`net.Socket` instances are EventEmitters with the following events:
+`net.Socket` instances are [EventEmitter][]s with the following events:
### Event: 'connect'
@@ -405,8 +417,7 @@ See `connect()`.
Emitted when data is received. The argument `data` will be a `Buffer` or
`String`. Encoding of data is set by `socket.setEncoding()`.
-(See the [Readable Stream](stream.html#readable_stream) section for more
-information.)
+(See the [Readable Stream][] section for more information.)
Note that the __data will be lost__ if there is no listener when a `Socket`
emits a `'data'` event.
@@ -465,3 +476,10 @@ Returns true if input is a version 4 IP address, otherwise returns false.
Returns true if input is a version 6 IP address, otherwise returns false.
+['connect']: #net_event_connect
+['connection']: #net_event_connection
+['end']: #net_event_end
+[EventEmitter]: events.html#events_class_events_eventemitter
+['listening']: #net_event_listening
+[Readable Stream]: stream.html#stream_readable_stream
+[stream.setEncoding()]: stream.html#stream_stream_setencoding_encoding
diff --git a/doc/api/os.markdown b/doc/api/os.markdown
index 33eb9b6317d..b5fffcf1cb4 100644
--- a/doc/api/os.markdown
+++ b/doc/api/os.markdown
@@ -6,6 +6,10 @@ Provides a few basic operating-system related utility functions.
Use `require('os')` to access this module.
+## os.tmpDir()
+
+Returns the operating system's default directory for temp files.
+
## os.hostname()
Returns the hostname of the operating system.
diff --git a/doc/api/process.markdown b/doc/api/process.markdown
index 21eda7d551f..35b104dfabd 100644
--- a/doc/api/process.markdown
+++ b/doc/api/process.markdown
@@ -3,7 +3,7 @@
The `process` object is a global object and can be accessed from anywhere.
-It is an instance of `EventEmitter`.
+It is an instance of [EventEmitter][].
## Event: 'exit'
@@ -295,18 +295,11 @@ An example of the possible output looks like:
node_shared_zlib: 'false',
node_use_dtrace: 'false',
node_use_openssl: 'true',
- node_use_system_openssl: 'false',
+ node_shared_openssl: 'false',
strict_aliasing: 'true',
target_arch: 'x64',
v8_use_snapshot: 'true' } }
-## process.installPrefix
-
-A compiled-in property that exposes `NODE_PREFIX`.
-
- console.log('Prefix: ' + process.installPrefix);
-
-
## process.kill(pid, [signal])
Send a signal to a process. `pid` is the process id and `signal` is the
@@ -425,3 +418,5 @@ a diff reading, useful for benchmarks and measuring intervals:
console.log('benchmark took %d seconds and %d nanoseconds', t[0], t[1]);
// benchmark took 1 seconds and 6962306 nanoseconds
}, 1000);
+
+[EventEmitter]: events.html#events_class_events_eventemitter
diff --git a/doc/api/stdio.markdown b/doc/api/stdio.markdown
index a70dcefa160..0da30a0add0 100644
--- a/doc/api/stdio.markdown
+++ b/doc/api/stdio.markdown
@@ -17,8 +17,7 @@ Prints to stdout with newline. This function can take multiple arguments in a
console.log('count: %d', count);
If formatting elements are not found in the first string then `util.inspect`
-is used on each argument.
-See [util.format()](util.html#util.format) for more information.
+is used on each argument. See [util.format()][] for more information.
## console.info([data], [...])
@@ -56,6 +55,8 @@ Print a stack trace to stderr of the current position.
## console.assert(expression, [message])
-Same as [assert.ok()](assert.html#assert_assert_value_message_assert_ok_value_message)
-where if the `expression` evaluates as `false` throw an AssertionError with `message`.
+Same as [assert.ok()][]: if `expression` evaluates to `false`, an
+`AssertionError` is thrown with `message`.
+[assert.ok()]: assert.html#assert_assert_value_message_assert_ok_value_message
+[util.format()]: util.html#util_util_format_format
diff --git a/doc/api/stream.markdown b/doc/api/stream.markdown
index 72d9c9c95a1..66160df63a0 100644
--- a/doc/api/stream.markdown
+++ b/doc/api/stream.markdown
@@ -4,7 +4,7 @@
A stream is an abstract interface implemented by various objects in Node.
For example a request to an HTTP server is a stream, as is stdout. Streams
-are readable, writable, or both. All streams are instances of `EventEmitter`.
+are readable, writable, or both. All streams are instances of [EventEmitter][].
You can load up the Stream base class by doing `require('stream')`.
@@ -182,3 +182,5 @@ Any queued write data will not be sent.
After the write queue is drained, close the file descriptor. `destroySoon()`
can still destroy straight away, as long as there is no data left in the queue
for writes.
+
+[EventEmitter]: events.html#events_class_events_eventemitter
diff --git a/doc/api/tls.markdown b/doc/api/tls.markdown
index f5283c3f954..279a672faa1 100644
--- a/doc/api/tls.markdown
+++ b/doc/api/tls.markdown
@@ -49,8 +49,8 @@ server-side resources, which makes it a potential vector for denial-of-service
attacks.
To mitigate this, renegotiations are limited to three times every 10 minutes. An
-error is emitted on the [CleartextStream](#tls.CleartextStream) instance when
-the threshold is exceeded. The limits are configurable:
+error is emitted on the [CleartextStream][] instance when the threshold is
+exceeded. The limits are configurable:
- `tls.CLIENT_RENEG_LIMIT`: renegotiation limit, default is 3.
@@ -78,10 +78,9 @@ handshake extensions allowing you:
## tls.createServer(options, [secureConnectionListener])
-Creates a new [tls.Server](#tls.Server).
-The `connectionListener` argument is automatically set as a listener for the
-[secureConnection](#event_secureConnection_) event.
-The `options` object has these possibilities:
+Creates a new [tls.Server][]. The `secureConnectionListener` argument is
+automatically set as a listener for the [secureConnection][] event. The
+`options` object has these possibilities:
- `pfx`: A string or `Buffer` containing the private key, certificate and
CA certs of the server in PFX or PKCS12 format. (Mutually exclusive with
@@ -241,9 +240,9 @@ Creates a new client connection to the given `port` and `host` (old API) or
- `servername`: Servername for SNI (Server Name Indication) TLS extension.
The `secureConnectListener` parameter will be added as a listener for the
-['secureConnect'](#event_secureConnect_) event.
+['secureConnect'][] event.
-`tls.connect()` returns a [CleartextStream](#tls.CleartextStream) object.
+`tls.connect()` returns a [CleartextStream][] object.
Here is an example of a client of echo server as described previously:
@@ -315,8 +314,8 @@ and the cleartext one is used as a replacement for the initial encrypted stream.
automatically reject clients with invalid certificates. Only applies to
servers with `requestCert` enabled.
-`tls.createSecurePair()` returns a SecurePair object with
-[cleartext](#tls.CleartextStream) and `encrypted` stream properties.
+`tls.createSecurePair()` returns a SecurePair object with [cleartext][] and
+`encrypted` stream properties.
## Class: SecurePair
@@ -342,9 +341,8 @@ connections using TLS or SSL.
`function (cleartextStream) {}`
This event is emitted after a new connection has been successfully
-handshaked. The argument is a instance of
-[CleartextStream](#tls.CleartextStream). It has all the common stream methods
-and events.
+handshaked. The argument is an instance of [CleartextStream][]. It has all the
+common stream methods and events.
`cleartextStream.authorized` is a boolean value which indicates if the
client was verified by one of the supplied certificate authorities for the
@@ -386,8 +384,8 @@ event.
### server.address()
Returns the bound address, the address family name and port of the
-server as reported by the operating system.
-See [net.Server.address()](net.html#server.address) for more information.
+server as reported by the operating system. See [net.Server.address()][] for
+more information.
### server.addContext(hostname, credentials)
@@ -410,8 +408,8 @@ The number of concurrent connections on the server.
This is a stream on top of the *Encrypted* stream that makes it possible to
read/write encrypted data as cleartext data.
-This instance implements a duplex [Stream](stream.html) interfaces.
-It has all the common stream methods and events.
+This instance implements a duplex [Stream][] interface. It has all the
+common stream methods and events.
A ClearTextStream is the `clear` member of a SecurePair object.
@@ -489,3 +487,10 @@ The string representation of the remote IP address. For example,
### cleartextStream.remotePort
The numeric representation of the remote port. For example, `443`.
+
+[CleartextStream]: #tls_class_tls_cleartextstream
+[net.Server.address()]: net.html#net_server_address
+['secureConnect']: #tls_event_secureconnect
+[secureConnection]: #tls_event_secureconnection
+[Stream]: stream.html#stream_stream
+[tls.Server]: #tls_class_tls_server
diff --git a/doc/api/zlib.markdown b/doc/api/zlib.markdown
index a4c3a93ef34..66c92846029 100644
--- a/doc/api/zlib.markdown
+++ b/doc/api/zlib.markdown
@@ -103,15 +103,6 @@ tradeoffs involved in zlib usage.
}
}).listen(1337);
-## Constants
-
-
-
-All of the constants defined in zlib.h are also defined on
-`require('zlib')`. They are described in more detail in the zlib
-documentation. See
-for more details.
-
## zlib.createGzip([options])
Returns a new [Gzip](#zlib_class_zlib_gzip) object with an
@@ -232,8 +223,8 @@ relevant when compressing, and are ignored by the decompression classes.
* strategy (compression only)
* dictionary (deflate/inflate only, empty dictionary by default)
-See the description of `deflateInit2` and `inflateInit2` at
- for more information on these.
+See the description of `deflateInit2` and `inflateInit2` in the zlib
+documentation for more information on these.
## Memory Usage Tuning
@@ -274,3 +265,69 @@ In general, greater memory usage options will mean that node has to make
fewer calls to zlib, since it'll be able to process more data in a
single `write` operation. So, this is another factor that affects the
speed, at the cost of memory usage.
+
+## Constants
+
+
+
+All of the constants defined in zlib.h are also defined on
+`require('zlib')`.
+In the normal course of operations, you will not need to ever set any of
+these. They are documented here so that their presence is not
+surprising. This section is taken almost directly from the
+[zlib documentation](http://zlib.net/manual.html#Constants); see that page
+for more details.
+
+Allowed flush values.
+
+* `zlib.Z_NO_FLUSH`
+* `zlib.Z_PARTIAL_FLUSH`
+* `zlib.Z_SYNC_FLUSH`
+* `zlib.Z_FULL_FLUSH`
+* `zlib.Z_FINISH`
+* `zlib.Z_BLOCK`
+* `zlib.Z_TREES`
+
+Return codes for the compression/decompression functions. Negative
+values are errors, positive values are used for special but normal
+events.
+
+* `zlib.Z_OK`
+* `zlib.Z_STREAM_END`
+* `zlib.Z_NEED_DICT`
+* `zlib.Z_ERRNO`
+* `zlib.Z_STREAM_ERROR`
+* `zlib.Z_DATA_ERROR`
+* `zlib.Z_MEM_ERROR`
+* `zlib.Z_BUF_ERROR`
+* `zlib.Z_VERSION_ERROR`
+
+Compression levels.
+
+* `zlib.Z_NO_COMPRESSION`
+* `zlib.Z_BEST_SPEED`
+* `zlib.Z_BEST_COMPRESSION`
+* `zlib.Z_DEFAULT_COMPRESSION`
+
+Compression strategy.
+
+* `zlib.Z_FILTERED`
+* `zlib.Z_HUFFMAN_ONLY`
+* `zlib.Z_RLE`
+* `zlib.Z_FIXED`
+* `zlib.Z_DEFAULT_STRATEGY`
+
+Possible values of the data_type field.
+
+* `zlib.Z_BINARY`
+* `zlib.Z_TEXT`
+* `zlib.Z_ASCII`
+* `zlib.Z_UNKNOWN`
+
+The deflate compression method (the only one supported in this version).
+
+* `zlib.Z_DEFLATED`
+
+For initializing zalloc, zfree, opaque.
+
+* `zlib.Z_NULL`
diff --git a/doc/blog.html b/doc/blog.html
new file mode 100644
index 00000000000..6d55b8fe9cc
--- /dev/null
+++ b/doc/blog.html
@@ -0,0 +1,240 @@
+
+
+
+
+
+
+
+
+
+ <%= title || "Node.js Blog" %>
+
+
+
+
+
+ <%
+ }
+ }
+ } else { // not single post page
+ if (paginated && total > 1 ) {
+ if (page > 0) {
+ // add 1 to all of the displayed numbers, because
+ // humans are not zero-indexed like they ought to be.
+ %>
+
+ <%
+ });
+
+ if (paginated && total > 1 ) {
+ if (page > 0) {
+ // add 1 to all of the displayed numbers, because
+ // humans are not zero-indexed like they ought to be.
+ %>
+
+ <%
+ }
+ } // pagination
+ } // not a single post
+ %>
+
+
+
+
+
+
+
+
+
+
+
diff --git a/doc/blog/README.md b/doc/blog/README.md
new file mode 100644
index 00000000000..7d37706470a
--- /dev/null
+++ b/doc/blog/README.md
@@ -0,0 +1,28 @@
+title: README.md
+status: private
+
+# How This Blog Works
+
+Each `.md` file in this folder structure is a blog post. It has a
+few headers and a markdown body. (HTML is allowed in the body as well.)
+
+The relevant headers are:
+
+1. title
+2. author
+3. status: Only posts with a status of "publish" are published.
+4. category: The "release" category is treated a bit specially.
+5. version: Only relevant for "release" category.
+6. date
+7. slug: The bit that goes in the URL. Must be unique; it will be
+   generated from the title and date if missing.
+
+Posts in the "release" category are only shown in the main lists when
+they are the most recent release for that version family. The stable
+branch supersedes its unstable counterpart, so the presence of a `0.8.2`
+release notice will cause `0.7.10` to be hidden, but `0.6.19` would
+be unaffected.
+
+The folder structure in the blog source does not matter. Organize the files
+here however makes sense to you; the metadata will be sorted out later, in
+the build.
diff --git a/doc/blog/Uncategorized/an-easy-way-to-build-scalable-network-programs.md b/doc/blog/Uncategorized/an-easy-way-to-build-scalable-network-programs.md
new file mode 100644
index 00000000000..e1a509ecd78
--- /dev/null
+++ b/doc/blog/Uncategorized/an-easy-way-to-build-scalable-network-programs.md
@@ -0,0 +1,16 @@
+title: An Easy Way to Build Scalable Network Programs
+author: ryandahl
+date: Tue Oct 04 2011 15:39:56 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: an-easy-way-to-build-scalable-network-programs
+
+Suppose you're writing a web server which does video encoding on each file upload. Video encoding is very much compute bound. Some recent blog posts suggest that Node.js would fail miserably at this.
+
+Using Node does not mean that you have to write a video encoding algorithm in JavaScript (a language without even 64 bit integers) and crunch away in the main server event loop. The suggested approach is to separate the I/O bound task of receiving uploads and serving downloads from the compute bound task of video encoding. In the case of video encoding this is accomplished by forking out to ffmpeg. Node provides advanced means of asynchronously controlling subprocesses for work like this.
+
+It has also been suggested that Node does not take advantage of multicore machines. Node has long supported load-balancing connections over multiple processes in just a few lines of code - in this way a Node server will use the available cores. In coming releases we'll make it even easier: just pass --balance on the command line and Node will manage the cluster of processes.
+
+Node has a clear purpose: provide an easy way to build scalable network programs. It is not a tool for every problem. Do not write a ray tracer with Node. Do not write a web browser with Node. Do however reach for Node if tasked with writing a DNS server, DHCP server, or even a video encoding server.
+
+By relying on the kernel to schedule and preempt computationally expensive tasks and to load balance incoming connections, Node appears less magical than server platforms that employ userland scheduling. So far, our focus on simplicity and transparency has paid off: the number of success stories from developers and corporations who are adopting the technology continues to grow.
diff --git a/doc/blog/Uncategorized/development-environment.md b/doc/blog/Uncategorized/development-environment.md
new file mode 100644
index 00000000000..f7141815890
--- /dev/null
+++ b/doc/blog/Uncategorized/development-environment.md
@@ -0,0 +1,25 @@
+title: Development Environment
+author: ryandahl
+date: Mon Apr 04 2011 20:16:27 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: development-environment
+
+If you're compiling a software package because you need a particular version (e.g. the latest), then it requires a little bit more maintenance than using a package manager like dpkg. Software that you compile yourself should *not* go into /usr, it should go into your home directory. This is part of being a software developer.
+
+One way of doing this is to install everything into `$HOME/local/$PACKAGE`. Here is how I install node on my machine:
+
+    ./configure --prefix=$HOME/local/node-v0.4.5 && make install
+
+To have my paths automatically set I put this inside my $HOME/.zshrc:
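The exact snippet from the original post is not preserved in this copy; a
sketch of what such a snippet could look like (the loop and variable names are
assumptions):

```shell
# Add every $HOME/local/<package>/bin directory to PATH.
for pkg in "$HOME"/local/*; do
  if [ -d "$pkg/bin" ]; then
    PATH="$pkg/bin:$PATH"
  fi
done
export PATH
```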
+
+Node is under sufficiently rapid development that everyone should be compiling it themselves. A corollary of this is that npm (which should be installed alongside Node) does not require root to install packages.
+
+CPAN and RubyGems have blurred the lines between development tools and system package managers. With npm we wish to draw a clear line: it is not a system package manager. It is not for installing firefox or ffmpeg or OpenSSL; it is for rapidly downloading, building, and setting up Node packages. npm is a development tool. When a program written in Node becomes sufficiently mature it should be distributed as a tarball, .deb, .rpm, or other package system. It should not be distributed to end users with npm.
diff --git a/doc/blog/Uncategorized/evolving-the-node-js-brand.md b/doc/blog/Uncategorized/evolving-the-node-js-brand.md
new file mode 100644
index 00000000000..dbb7f852f15
--- /dev/null
+++ b/doc/blog/Uncategorized/evolving-the-node-js-brand.md
@@ -0,0 +1,34 @@
+title: Evolving the Node.js Brand
+author: Emily Tanaka-Delgado
+date: Mon Jul 11 2011 12:02:45 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: evolving-the-node-js-brand
+
+To echo Node’s evolutionary nature, we have refreshed the identity to help mark an exciting time for developers, businesses and users who benefit from the pioneering technology.
+
+## Building a brand
+
+We began exploring elements to express Node.js, jettisoned preconceived notions about what we thought Node should look like, and focused on what Node is: kinetic, connected, scalable, modular, mechanical and organic. Working with designer Chris Glass, our explorations emphasized Node's dynamism and formed a visual language based on structure, relationships and interconnectedness.
+
+
+
+Inspired by process visualization, we discovered pattern, form, and by relief, the hex shape. The angled infrastructure encourages energy to move through the letterforms.
+
+
+
+This language can expand into the organic network topography of Node or distill down into a single hex connection point.
+
+This scaling represents the dynamic nature of Node in a simple, distinct manner.
+
+
+
+We look forward to exploring this visual language as the technology charges into a very promising future.
+
+
+
+We hope you'll have fun using it.
+
+To download the new logo, visit nodejs.org/logos.
+
+
diff --git a/doc/blog/Uncategorized/growing-up.md b/doc/blog/Uncategorized/growing-up.md
new file mode 100644
index 00000000000..6faff59cec7
--- /dev/null
+++ b/doc/blog/Uncategorized/growing-up.md
@@ -0,0 +1,12 @@
+title: Growing up
+author: ryandahl
+date: Thu Dec 15 2011 11:59:15 GMT-0800 (PST)
+status: publish
+category: Uncategorized
+slug: growing-up
+
+This week Microsoft announced support for Node in Windows Azure, their cloud computing platform. For the Node core team and the community, this is an important milestone. We've worked hard over the past six months reworking Node's machinery to support IO completion ports and Visual Studio to provide a good native port to Windows. The overarching goal of the port was to expand our user base to the largest number of developers. Happily, this has paid off in the form of being a first class citizen on Azure. Many users who would have never used Node as a pure unix tool are now up and running on the Windows platform. More users translates into a deeper and better ecosystem of modules, which makes for a better experience for everyone.
+
+We also redesigned our website - something that we've put off for a long time because we felt that Node was too nascent to dedicate marketing to it. But now that we have binary distributions for Macintosh and Windows, have bundled npm, and are serving millions of users at various companies, we felt ready to indulge in a new website and share a few of our success stories on the home page.
+
+Work is on-going. We continue to improve the software, making performance improvements and adding isolate support, but Node is growing up.
diff --git a/doc/blog/Uncategorized/jobs-nodejs-org.md b/doc/blog/Uncategorized/jobs-nodejs-org.md
new file mode 100644
index 00000000000..bf8278b816c
--- /dev/null
+++ b/doc/blog/Uncategorized/jobs-nodejs-org.md
@@ -0,0 +1,14 @@
+title: jobs.nodejs.org
+author: ryandahl
+date: Thu Mar 24 2011 23:05:22 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: jobs-nodejs-org
+
+We are starting an official jobs board for Node. There are two goals for this:
+
+1. Promote the small emerging economy around this platform by having a central space for employers to find Node programmers.
+
+2. Make some money. We work hard to build this platform, and taking a small tax for job posts seems like a reasonable "tip jar".
+
+jobs.nodejs.org
diff --git a/doc/blog/Uncategorized/ldapjs-a-reprise-of-ldap.md b/doc/blog/Uncategorized/ldapjs-a-reprise-of-ldap.md
new file mode 100644
index 00000000000..57d4af7fe13
--- /dev/null
+++ b/doc/blog/Uncategorized/ldapjs-a-reprise-of-ldap.md
@@ -0,0 +1,84 @@
+title: ldapjs: A reprise of LDAP
+author: mcavage
+date: Thu Sep 08 2011 14:25:43 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: ldapjs-a-reprise-of-ldap
+
+This post has been about 10 years in the making. My first job out of college was at IBM working on the Tivoli Directory Server, and at the time I had a preconceived notion that working on anything related to Internet RFCs was about as hot as you could get. I spent a lot of time back then getting "down and dirty" with everything about LDAP: the protocol, performance, storage engines, indexing and querying, caching, customer use cases and patterns, general network server patterns, etc. Basically, I soaked up as much as I possibly could while I was there. On top of that, I listened to all the "gray beards" tell me about the history of LDAP, which was a bizarre marriage of telecommunications conglomerates and graduate students. The point of this blog post is to give you a crash course in LDAP, and explain what makes ldapjs different. Allow me to be the gray beard for a bit...
+
+## What is LDAP and where did it come from?
+
+Directory services were largely pioneered by the telecommunications companies (e.g., AT&T) to allow fast information retrieval of all the crap you'd expect would be in a telephone book and directory. That is, given a name, or an address, or an area code, or a number, or a foo, they support looking up customer records, billing information, routing information, etc. The efforts of several telcos came to exist in the X.500 standard(s). An X.500 directory is one of the most complicated beasts you can possibly imagine, but on a high note, there's
+probably not a thing you can imagine in a directory service that wasn't thought of in there. It is literally the kitchen sink. Oh, and it doesn't run over IP (it's actually on the OSI model).
+
+Several years after X.500 had been deployed (at telcos, academic institutions, etc.), it became clear that the Internet was "for real." LDAP, the "Lightweight Directory Access Protocol," was invented to act purely as an IP-accessible gateway to an X.500 directory.
+
+At some point in the early 90's, a graduate student at the University of Michigan (with some help) cooked up the "grandfather" implementation of the LDAP protocol, which wasn't actually a "gateway," but rather a stand-alone implementation of LDAP. Said implementation, like many things at the time, was a process-per-connection concurrency model, and had "backends" (aka storage engine) for the file system and the Unix DB API. At some point the Berkeley Database (BDB) was put in, and still remains the de facto storage engine for most LDAP directories.
+
+Ok, so a graduate student at UM wrote an LDAP server that wasn't a gateway. So what? Well, that UM code base turns out to be the thing that pretty much every vendor did a source license for. Those graduate students went off to Netscape later in the 90's, and largely dominated the market of LDAP middleware until Active Directory came along many years later (as far as I know, Active Directory is "from scratch", since while it's "almost" LDAP, it's different in a lot of ways). That Netscape code base was further bought and sold over the years to iPlanet, Sun Microsystems, and Red Hat (I'm probably missing somebody in that chain). It now lives in the Fedora umbrella as '389 Directory Server.' Probably the most popular fork of that code base now is OpenLDAP.
+
+IBM did the same thing, and the Directory Server I worked on was a fork of the UM code too, but it heavily diverged from the Netscape branches. The divergence was primarily due to: (1) backing to DB2 as opposed to BDB, and (2) needing to run on IBM's big iron like OS/400 and Z series mainframes.
+
+Macro point is that there have actually been very few "fresh" implementations of LDAP, and it gets a pretty bad reputation because at the end of the day you've got 20 years of "bolt-ons" to grad student code. Oh, and it was born out of ginormous telcos, so of course the protocol is overly complex.
+
+That said, while there certainly is some wacky stuff in the LDAP protocol itself, it really suffered from poor and buggy implementations more than the fact that LDAP itself was fundamentally flawed. As Engine Yard pointed out a few years back, you can think of LDAP as the original NoSQL store.
+
+## LDAP: The Good Parts
+
+So what's awesome about LDAP? Since it's a directory system it maintains a hierarchy of your data, which as an information management pattern aligns
+with _a lot_ of use cases (the quintessential example is white pages for people in your company, but subscriptions to SaaS applications, "host groups"
+for tracking machines/instances, physical goods tracking, etc., all have use cases that fit that organization scheme). For example, presumably at your job
+you have a "reporting chain." Let's say a given record in LDAP (I'll use myself as a guinea pig here) looks like:
+
+The record for me would live under the tree of engineers I report to (and, as an example, some other popular engineers under said vice president), and would look like:
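The example records from the original post do not survive in this copy. A
hypothetical LDIF-style record (every name and attribute here is invented for
illustration) might look like:

```
dn: cn=mcavage, ou=engineering, o=example
cn: mcavage
title: Software Engineer
city: Seattle
objectclass: person
```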
+
+Ok, so we've got a tree. It's not tremendously different from your filesystem, but how do we find people? LDAP has a rich search filter syntax that makes a lot of sense for key/value data (far more than tacking Map Reduce jobs on does, imo), and all search queries take a "start point" in the tree. Here's an example: let's say I wanted to find all "Software Engineers" in the entire company, a filter would look like:
+
+    (title="Software Engineer")
+
+And I'd just start my search from 'uid=david' in the example above. Let's say I wanted to find all software engineers who worked in Seattle:
+
+    (&(title="Software Engineer")(city=Seattle))
+
+I could keep going, but the gist is that LDAP has "full" boolean predicate logic, wildcard filters, etc. It's really rich.
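As a sketch of how such filters compose (these helper functions are
hypothetical, written for this example; they are not part of ldapjs or any
LDAP library):

```javascript
// Hypothetical helpers that build LDAP filter strings like the ones above.
function eq(attr, value) {
  return '(' + attr + '=' + value + ')';
}
function and() {
  return '(&' + Array.prototype.join.call(arguments, '') + ')';
}
function or() {
  return '(|' + Array.prototype.join.call(arguments, '') + ')';
}

// All Software Engineers who work in Seattle:
var filter = and(eq('title', 'Software Engineer'), eq('city', 'Seattle'));
// filter === '(&(title=Software Engineer)(city=Seattle))'
```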
+
+Oh, and on top of the technical merits, better or worse, it's an established standard for both administrators and applications (i.e., most "shipped" intranet software has either a local user repository or the ability to leverage an LDAP server somewhere). So there's a lot of compelling reasons to look at leveraging LDAP.
+
+## ldapjs: Why do I care?
+
+As I said earlier, I spent a lot of time at IBM observing how customers used LDAP, and the real items I took away from that experience were:
+
+
+* LDAP implementations have suffered a lot from never having been designed from the ground up for a large number of concurrent connections with asynchronous operations.
+
+* There are use cases for LDAP that just don't always fit the traditional "here's my server and storage engine" model. A lot of simple customer use cases wanted an LDAP access point, but did not want to be forced into taking the heavy backends that came with it (they wanted the original gateway model!). There was an entire "sub" industry for this known as "meta directories" back in the late 90's and early 2000's.
+
+* Replication was always a sticking point. LDAP vendors all tried to offer a big multi-master, multi-site replication model. It was a lot of "bolt-on" complexity, done before the CAP theorem was written, and certainly before it was accepted as "truth."
+
+* Nobody uses all of the protocol. In fact, 20% of the features solve 80% of the use cases (I'm making that number up, but you get the idea).
+
+
+For all the good parts of LDAP, those are really damned big failing points, and even I eventually abandoned LDAP for the greener pastures of NoSQL somewhere
+along the way. But it always nagged at me that LDAP didn't get its due because of a lot of implementation problems (to be clear, if I could, I'd change some
+aspects of the protocol itself too, but that's a lot harder).
+
+Well, in the last year, I went to work for Joyent, and like everyone else, we have several problems that are classic directory service problems. If you break down the list I outlined above:
+
+
+* Connection-oriented and asynchronous: Holy smokes batman, node.js is a completely kick-ass event-driven asynchronous server platform that manages connections like a boss. Check!
+
+* Lots of use cases: Yeah, we've got some. Man, the sinatra/express paradigm is so easy to slap over anything. How about we just do that and leave as many use cases open as we can. Check!
+
+* Replication is hard. CAP is right: There are a lot of distributed databases out there vying to solve exactly this problem. At Joyent we went with Riak. Check!
+
+* Don't need all of the protocol: I'm lazy. Let's just skip the stupid things most people don't need. Check!
+
+
+So that's the crux of ldapjs right there. Giving you the ability to put LDAP back into your application while nailing those 4 fundamental problems that plague most existing LDAP deployments.
+
+The obvious question is how it turned out, and the answer is, honestly, better than I thought it would. When I set out to do this, I actually assumed I'd be shipping a much smaller percentage of the RFC than is there. There's actually about 95% of the core RFC implemented. I wasn't sure if the marriage of this protocol to node/JavaScript would work out, but if you've used express ever, this should be _really_ familiar. And I tried to make it as natural as possible to use "pure" JavaScript objects, rather than requiring the developer to understand ASN.1 (the binary wire protocol) or the LDAP RFC in detail (this one mostly worked out; ldap_modify is still kind of a PITA).
+
+Within 24 hours of releasing ldapjs on Twitter, there was an implementation of an address book that works with Thunderbird/Evolution, by the end of that weekend there was some slick integration with CouchDB, and ldapjs even got used in one of the node knockout apps. Off to a pretty good start!
+
+
+## The Road Ahead
+
+Hopefully you've been motivated to learn a little bit more about LDAP and try out ldapjs. The best place to start is probably the guide. After that you'll probably need to pick up a book from back in the day. ldapjs itself is still in its infancy; there's quite a bit of room to add some slick client-side logic (e.g., connection pools, automatic reconnects), easy to use schema validation, backends, etc. By the time this post is live, there will be experimental dtrace support if you're running on Mac OS X or preferably Joyent's SmartOS (shameless plug). And that nagging percentage of the protocol I didn't do will get filled in over time I suspect. If you've got an interest in any of this, send me some pull requests, but most importantly, I just want to see LDAP not just be a skeleton in the closet and get used in places where you should be using it. So get out there and write you some LDAP.
diff --git a/doc/blog/Uncategorized/libuv-status-report.md b/doc/blog/Uncategorized/libuv-status-report.md
new file mode 100644
index 00000000000..68637a43b92
--- /dev/null
+++ b/doc/blog/Uncategorized/libuv-status-report.md
@@ -0,0 +1,45 @@
+title: libuv status report
+author: ryandahl
+date: Fri Sep 23 2011 12:45:50 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: libuv-status-report
+
+We announced back in July that with Microsoft's support Joyent would be porting Node to Windows. This effort is ongoing but I thought it would be nice to make a status report post about the new platform library libuv which has resulted from porting Node to Windows.
+
+libuv's purpose is to abstract platform-dependent code in Node into one place where it can be tested for correctness and performance before bindings to V8 are added. Since Node is totally non-blocking, libuv turns out to be a rather useful library itself: a BSD-licensed, minimal, high-performance, cross-platform networking library.
+
+We attempt not to reinvent the wheel where possible. The entire Unix backend sits heavily on Marc Lehmann's beautiful libraries libev and libeio. For DNS we integrated with Daniel Stenberg's C-Ares. For cross-platform build-system support we're relying on Chrome's GYP meta-build system.
+
+The currently implemented features are:
+
File system events: uv_fs_event_t (currently supports inotify and ReadDirectoryChangesW; kqueue and event ports will be supported in the near future)
+
VT100 TTY: uv_tty_t
+
Socket sharing between processes: uv_ipc_t (planned API)
+
+For complete documentation see the header file: include/uv.h. There are a number of tests in the test directory which demonstrate the API.
+
+libuv supports Microsoft Windows operating systems from Windows XP SP2 onward, built with either Visual Studio or MinGW; Solaris 121 and later using the GCC toolchain; Linux 2.6 or later using the GCC toolchain; and Macintosh Darwin using the GCC or XCode toolchain. It is known to work on the BSDs, but we do not check the build regularly.
+
+In addition to Node v0.5, a number of projects have begun to use libuv:
+
+We hope to see more people contributing and using libuv in the future!
diff --git a/doc/blog/Uncategorized/node-meetup-this-thursday.md b/doc/blog/Uncategorized/node-meetup-this-thursday.md
new file mode 100644
index 00000000000..0dfb98dae50
--- /dev/null
+++ b/doc/blog/Uncategorized/node-meetup-this-thursday.md
@@ -0,0 +1,11 @@
+title: Node Meetup this Thursday
+author: ryandahl
+date: Tue Aug 02 2011 21:37:02 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: node-meetup-this-thursday
+
+http://nodejs.org/meetup/
+http://nodemeetup.eventbrite.com/
+
+Three companies will describe their distributed Node applications. Sign up soon, space is limited!
diff --git a/doc/blog/Uncategorized/node-office-hours-cut-short.md b/doc/blog/Uncategorized/node-office-hours-cut-short.md
new file mode 100644
index 00000000000..48d0344057a
--- /dev/null
+++ b/doc/blog/Uncategorized/node-office-hours-cut-short.md
@@ -0,0 +1,12 @@
+title: Node Office Hours Cut Short
+author: ryandahl
+date: Thu Apr 28 2011 09:04:35 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: node-office-hours-cut-short
+
+This week office hours are only from 4pm to 6pm. Isaac will be in the Joyent office in SF - everyone else is out of town. Sign up at http://nodeworkup.eventbrite.com/ if you would like to come.
+
+The week after, Thursday May 5th, we will all be at NodeConf in Portland.
+
+Normal office hours resume Thursday May 12th.
diff --git a/doc/blog/Uncategorized/office-hours.md b/doc/blog/Uncategorized/office-hours.md
new file mode 100644
index 00000000000..e4c94992ce7
--- /dev/null
+++ b/doc/blog/Uncategorized/office-hours.md
@@ -0,0 +1,12 @@
+title: Office Hours
+author: ryandahl
+date: Wed Mar 23 2011 21:42:47 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: office-hours
+
+Starting next Thursday Isaac, Tom, and I will be holding weekly office hours at Joyent HQ in San Francisco. Office hours are meant to be subdued working time - there are no talks and no alcohol. Bring your bugs or just come and hack with us.
+
+Our building requires that everyone attending be on a list so you must sign up at Event Brite.
+
+We start at 4p and end promptly at 8p.
diff --git a/doc/blog/Uncategorized/porting-node-to-windows-with-microsoft%e2%80%99s-help.md b/doc/blog/Uncategorized/porting-node-to-windows-with-microsoft%e2%80%99s-help.md
new file mode 100644
index 00000000000..d2be3e3ba2a
--- /dev/null
+++ b/doc/blog/Uncategorized/porting-node-to-windows-with-microsoft%e2%80%99s-help.md
@@ -0,0 +1,12 @@
+title: Porting Node to Windows With Microsoft’s Help
+author: ryandahl
+date: Thu Jun 23 2011 15:22:58 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: porting-node-to-windows-with-microsoft%e2%80%99s-help
+
+I'm pleased to announce that Microsoft is partnering with Joyent in formally contributing resources towards porting Node to Windows. As you may have heard in a talk we gave earlier this year, we have started the undertaking of a native port to Windows - targeting the high-performance IOCP API.
+
+This requires a rather large modification of the core structure, and we're very happy to have official guidance and engineering resources from Microsoft. Rackspace is also contributing Bert Belder's time to this undertaking.
+
+The result will be official binary node.exe releases on nodejs.org, which will work on Windows Azure and other Windows versions as far back as Server 2003.
diff --git a/doc/blog/Uncategorized/profiling-node-js.md b/doc/blog/Uncategorized/profiling-node-js.md
new file mode 100644
index 00000000000..ff259c3c42c
--- /dev/null
+++ b/doc/blog/Uncategorized/profiling-node-js.md
@@ -0,0 +1,60 @@
+title: Profiling Node.js
+author: Dave Pacheco
+date: Wed Apr 25 2012 13:48:58 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: profiling-node-js
+
+It's incredibly easy to visualize where your Node program spends its time using DTrace and node-stackvis (a Node port of Brendan Gregg's FlameGraph tool):
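+The DTrace invocation itself was lost in formatting here; based on the description below (profiling at roughly 100 Hz for 60 seconds, writing to stacks.out), it looks something like this sketch:
+
```
# dtrace -n 'profile-97/execname == "node" && arg1/{
    @[jstack(150, 8000)] = count(); } tick-60s { exit(0); }' > stacks.out
```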
+
+
+
+ This will sample about 100 times per second for 60 seconds and emit results to stacks.out. Note that this will sample all running programs called "node". If you want a specific process, replace execname == "node" with pid == 12345 (the process id).
+
+
Use the "stackvis" tool to transform this directly into a flame graph. First, install it:
+
$ npm install -g stackvis
+ then use stackvis to convert the DTrace output to a flamegraph:
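The conversion command went missing here; a sketch of the stackvis invocation implied by the text (the output filename is illustrative):

```sh
$ stackvis dtrace flamegraph-svg < stacks.out > stacks.svg
```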
+
+
+
+You'll be looking at something like this:
+
+
+
+This is a visualization of all of the profiled call stacks. This example is from the "hello world" HTTP server on the Node.js home page under load. Start at the bottom, where you have "main", which is present in most Node stacks because Node spends most of its on-CPU time in the main thread. Each row shows the functions called by the frames beneath it. As you move up, you'll see actual JavaScript function names. The boxes in each row are not in chronological order, but their width indicates how much time was spent there. When you hover over a box, you can see exactly what percentage of time was spent in that function. This lets you see at a glance where your program spends its time.
+
+That's the summary. There are a few prerequisites:
+
+
+
You must gather data on a system that supports DTrace with the Node.js ustack helper. For now, this pretty much means illumos-based systems like SmartOS, including the Joyent Cloud. MacOS users: OS X supports DTrace, but not ustack helpers. The way to get this changed is to contact your Apple developer liaison (if you're lucky enough to have one) or file a bug report at bugreport.apple.com. I'd suggest referencing existing bugs 5273057 and 11206497. More bugs filed (even if closed as dups) show more interest and make it more likely Apple will choose to fix this.
+
You must be on 32-bit Node.js 0.6.7 or later, built with --with-dtrace. The helper doesn't work with 64-bit Node yet. On illumos (including SmartOS), development releases (the 0.7.x train) include DTrace support by default.
+
+
+There are a few other notes:
+
+
+
You can absolutely profile apps in production, not just development, since compiling with DTrace support has very minimal overhead. You can start and stop profiling without restarting your program.
+
You may want to run the stacks.out output through c++filt to demangle C++ symbols. Be sure to use the c++filt that came with the compiler you used to build Node. For example:
+
c++filt < stacks.out > demangled.out
+ then you can use demangled.out to create the flamegraph.
+
+
If you want, you can filter stacks containing a particular function. The best way to do this is to first collapse the original DTrace output, then grep out what you want:
+
If you've used Brendan's FlameGraph tools, you'll notice the coloring is a little different in the above flamegraph. I ported his tools to Node first so I could incorporate them more easily into other Node programs, but I've also been playing with different coloring options. The current default uses hue to denote stack depth and saturation to indicate time spent. (These are also indicated by position and size.) Other ideas include coloring by module (so V8, JavaScript, libc, etc. show up as different colors).
+
+
+
+For more on the underlying pieces, see my previous post on Node.js profiling and Brendan's post on Flame Graphs.
+
+
+
+Dave Pacheco blogs at dtrace.org
diff --git a/doc/blog/Uncategorized/some-new-node-projects.md b/doc/blog/Uncategorized/some-new-node-projects.md
new file mode 100644
index 00000000000..77515af7364
--- /dev/null
+++ b/doc/blog/Uncategorized/some-new-node-projects.md
@@ -0,0 +1,13 @@
+title: Some New Node Projects
+author: ryandahl
+date: Mon Aug 29 2011 08:30:41 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: some-new-node-projects
+
+
+
Superfeedr released a Node XMPP Server. "Since astro had been doing an amazing work with his node-xmpp library to build Client, Components and even Server to server modules, the logical next step was to try to build a Client to Server module so that we could have a full blown server. That’s what we worked on the past couple days, and it’s now on Github!"
+
+
Joyent's Mark Cavage released LDAP.js. "ldapjs is a pure JavaScript, from-scratch framework for implementing LDAP clients and servers in Node.js. It is intended for developers used to interacting with HTTP services in node and express."
+
+
Microsoft's Tomasz Janczuk released iisnode. "The iisnode project provides a native IIS 7.x module that allows hosting of node.js applications in IIS."
Scott Hanselman posted a detailed walkthrough of how to get started with iisnode.
diff --git a/doc/blog/Uncategorized/the-videos-from-node-meetup.md b/doc/blog/Uncategorized/the-videos-from-node-meetup.md
new file mode 100644
index 00000000000..aa2ce5ac564
--- /dev/null
+++ b/doc/blog/Uncategorized/the-videos-from-node-meetup.md
@@ -0,0 +1,10 @@
+title: The Videos from the Meetup
+author: ryandahl
+date: Fri Aug 12 2011 00:14:34 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: the-videos-from-node-meetup
+
+Uber, Voxer, and Joyent described how they use Node in production
+
+http://joyeur.com/2011/08/11/node-js-meetup-distributed-web-architectures/
diff --git a/doc/blog/Uncategorized/trademark.md b/doc/blog/Uncategorized/trademark.md
new file mode 100644
index 00000000000..fc89cfa044d
--- /dev/null
+++ b/doc/blog/Uncategorized/trademark.md
@@ -0,0 +1,17 @@
+title: Trademark
+author: ryandahl
+date: Fri Apr 29 2011 01:54:18 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: trademark
+
+One of the things Joyent accepted when we took on the Node project was to provide resources to help the community grow. The Node project is amazing because of the expertise, dedication and hard work of the community. However, in all communities there is the possibility of people acting inappropriately. We decided to introduce trademarks on “Node.js” and the “Node logo” in order to ensure that people or organisations who are not investing in the Node community cannot misrepresent, or create confusion about, the role of themselves or their products with Node.
+
+We are big fans of the people who have contributed to Node and we have worked hard to make sure that existing members of the community will be unaffected by this change. Most people don’t have to do anything; they are free to use the Node.js marks in their free open source projects (see guidelines). For others, we’ve already granted licenses to use the Node.js marks in their domain names and their businesses. We value all of these contributions to the Node community and hope that we can continue to protect their good names and hard work.
+
+Where does our trademark policy come from? We started by looking at popular open source foundations like the Apache Software Foundation and Linux. By strongly basing our policy on the one used by the Apache Software Foundation we feel that we’ve created a policy which is liberal enough to allow the open source community to easily make use of the mark in the context of free open source software, but secure enough to protect the community’s work from being misrepresented by other organisations.
+
+While we realise that any changes involving lawyers can be intimidating to the community, we want to make this transition as smooth as possible, and we welcome your questions and feedback on the policy and how we are implementing it.
+
+http://nodejs.org/trademark-policy.pdf
+trademark@joyent.com
diff --git a/doc/blog/Uncategorized/version-0-6.md b/doc/blog/Uncategorized/version-0-6.md
new file mode 100644
index 00000000000..9ab82985f97
--- /dev/null
+++ b/doc/blog/Uncategorized/version-0-6.md
@@ -0,0 +1,12 @@
+title: Version 0.6 Coming Soon
+author: ryandahl
+date: Tue Oct 25 2011 15:26:23 GMT-0700 (PDT)
+status: publish
+category: Uncategorized
+slug: version-0-6
+
+Version 0.6.0 will be released next week. Please spend some time this week upgrading your code to v0.5.10. Report any API differences at https://github.com/joyent/node/wiki/API-changes-between-v0.4-and-v0.6 or report a bug to us at http://github.com/joyent/node/issues if you hit problems.
+
+The API changes between v0.4.12 and v0.5.10 are 99% cosmetic, minor, and easy to fix. Most people are able to migrate their code in 10 minutes. Don't fear.
+
+Once you've ported your code to v0.5.10 please help out by testing third party modules. Make bug reports. Encourage authors to publish new versions of their modules. Go through the list of modules at http://search.npmjs.org/ and try out random ones. This is especially encouraged of Windows users!
diff --git a/doc/blog/module/multi-server-continuous-deployment-with-fleet.md b/doc/blog/module/multi-server-continuous-deployment-with-fleet.md
new file mode 100644
index 00000000000..7d76ad894cb
--- /dev/null
+++ b/doc/blog/module/multi-server-continuous-deployment-with-fleet.md
@@ -0,0 +1,89 @@
+title: multi-server continuous deployment with fleet
+author: Isaac Schlueter
+date: Wed May 02 2012 11:00:00 GMT-0700 (PDT)
+status: publish
+category: module
+slug: multi-server-continuous-deployment-with-fleet
+
+
This is a guest post by James "SubStack" Halliday, originally posted on his blog, and reposted here with permission.
+
+
Writing applications as a sequence of tiny services that all talk to each other over the network has many upsides, but it can be annoyingly tedious to get all the subsystems up and running.
+
+
Running a seaport can help with getting all the services to talk to each other, but running the processes is another matter, especially when you have new code to push into production.
+
+
fleet aims to make it really easy for anyone on your team to push new code from git to an armada of servers and manage all the processes in your stack.
+
+
To start using fleet, just install the fleet command with npm:
+
+
npm install -g fleet
+
+
Then on one of your servers, start a fleet hub. From a fresh directory, give it a passphrase and a port to listen on:
+
+
fleet hub --port=7000 --secret=beepboop
+
+
+Now fleet is listening on :7000 for commands and has started a git server on :7001 over http. There are no ssh keys or post-commit hooks to configure; just run that command and you're ready to go!
+
+
Next set up some worker drones to run your processes. You can have as many workers as you like on a single server but each worker should be run from a separate directory. Just do:
+
+
fleet drone --hub=x.x.x.x:7000 --secret=beepboop
+
+
where x.x.x.x is the address where the fleet hub is running. Spin up a few of these drones.
+
+
Now navigate to the directory of the app you want to deploy. First set a remote so you don't need to type --hub and --secret all the time.
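The command that sets the remote didn't survive the formatting here; it is something along these lines (the remote name and the host/secret values are carried over from the earlier examples and are illustrative):

```sh
fleet remote add default --hub x.x.x.x:7000 --secret beepboop
```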
Fleet just created a fleet.json file for you to save your settings.
+
+
From the same app directory, to deploy your code just do:
+
+
fleet deploy
+
+
The deploy command does a git push to the fleet hub's git http server and then the hub instructs all the drones to pull from it. Your code gets checked out into a new directory on all the fleet drones every time you deploy.
+
+
Because fleet is designed specifically for managing applications with lots of tiny services, the deploy command isn't tied to running any processes. Starting processes is up to the programmer but it's super simple. Just use the fleet spawn command:
+
+
fleet spawn -- node server.js 8080
+
+
By default fleet picks a drone at random to run the process on. You can specify which drone you want to run a particular process on with the --drone switch if it matters.
+
+
Start a few processes across all your worker drones and then show what is running with the fleet ps command:
Now suppose that you have new code to push out into production. By default, fleet lets you spin up new services without disturbing your existing services. If you fleet deploy again after checking in some new changes to git, the next time you fleet spawn a new process, that process will be spun up in a completely new directory based on the git commit hash. To stop a process, just use fleet stop.
+
+
This approach lets you verify that the new services work before bringing down the old services. You can even start experimenting with heterogeneous and incremental deployment by hooking into a custom http proxy!
+
+
Even better, if you use a service registry like seaport for managing the host/port tables, you can spin up new ad-hoc staging clusters all the time without disrupting the normal operation of your site before rolling out new code to users.
+
+
Fleet has many more commands that you can learn about with its git-style manpage-based help system! Just do fleet help to get a list of all the commands you can run.
+
+
fleet help
+Usage: fleet <command> [<args>]
+
+The commands are:
+ deploy Push code to drones.
+ drone Connect to a hub as a worker.
+ exec Run commands on drones.
+ hub Create a hub for drones to connect.
+ monitor Show service events system-wide.
+ ps List the running processes on the drones.
+ remote Manage the set of remote hubs.
+ spawn Run services on drones.
+ stop Stop processes running on drones.
+
+For help about a command, try `fleet help <command>`.
Service logs are gold, if you can mine them. We scan them for occasional debugging. Perhaps we grep them looking for errors or warnings, or set up an occasional nagios log regex monitor. If that. This is a waste of the best channel for data about a service.
non real-time analysis (business or operational analysis)
+
historical analysis
+
+
+
+These are what logs are good for. The current state of logging is barely adequate for the first of these. Doing reliable analysis, and even monitoring, of varied "printf-style" logs is a grueling or hacky task that most either don't bother with, fall back to paying someone else to do (viz. Splunk's great successes), or, for web sites, punt on and use the plethora of JavaScript-based web analytics tools.
+
+
Let's log in JSON. Let's format log records with a filter outside the app. Let's put more info in log records by not shoehorning into a printf-message. Debuggability can be improved. Monitoring and analysis can definitely be improved. Let's not write another regex-based parser, and use the time we've saved writing tools to collate logs from multiple nodes and services, to query structured logs (from all services, not just web servers), etc.
+
+
At Joyent we use node.js for running many core services -- loosely coupled through HTTP REST APIs and/or AMQP. In this post I'll draw on experiences from my work on Joyent's SmartDataCenter product and observations of Joyent Cloud operations to suggest some improvements to service logging. I'll show the (open source) Bunyan logging library and tool that we're developing to improve the logging toolchain.
+
+
Current State of Log Formatting
+
+
# apache access log
+10.0.1.22 - - [15/Oct/2010:11:46:46 -0700] "GET /favicon.ico HTTP/1.1" 404 209
+fe80::6233:4bff:fe29:3173 - - [15/Oct/2010:11:46:58 -0700] "GET / HTTP/1.1" 200 44
+
+# apache error log
+[Fri Oct 15 11:46:46 2010] [error] [client 10.0.1.22] File does not exist: /Library/WebServer/Documents/favicon.ico
+[Fri Oct 15 11:46:58 2010] [error] [client fe80::6233:4bff:fe29:3173] File does not exist: /Library/WebServer/Documents/favicon.ico
+
+# Mac /var/log/secure.log
+Oct 14 09:20:56 banana loginwindow[41]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
+Oct 14 12:32:20 banana com.apple.SecurityServer[25]: UID 501 authenticated as user trentm (UID 501) for right 'system.privilege.admin'
+
+# an internal joyent agent log
+[2012-02-07 00:37:11.898] [INFO] AMQPAgent - Publishing success.
+[2012-02-07 00:37:11.910] [DEBUG] AMQPAgent - { req_id: '8afb8d99-df8e-4724-8535-3d52adaebf25',
+ timestamp: '2012-02-07T00:37:11.898Z',
+
+# typical expressjs log output
+[Mon, 21 Nov 2011 20:52:11 GMT] 200 GET /foo (1ms)
+Blah, some other unstructured output to from a console.log call.
+
+
+
What are we doing here? Five logs at random. Five different date formats. As Paul Querna points out, we haven't improved log parsability in 20 years. Parsability is enemy number one. You can't use your logs until you can parse the records, and faced with the above, the inevitable solution is a one-off regular expression.
+
+
The current state of the art is various parsing libs, analysis tools and homebrew scripts ranging from grep to Perl, whose scope is limited to a few niche log formats.
+
+
JSON for Logs
+
+
JSON.parse() solves all that. Let's log in JSON. But it means a change in thinking: The first-level audience for log files shouldn't be a person, but a machine.
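To make the point concrete, here is a minimal sketch of the idea in plain node (no libraries; the service name and fields are made up): emit one JSON object per line, and parsing a record becomes a single JSON.parse call instead of a one-off regex.

```javascript
// Emit one self-describing JSON object per log line.
function logRecord(level, msg, extra) {
  var rec = { name: 'myservice', level: level,
              time: new Date().toISOString(), msg: msg };
  for (var k in extra) rec[k] = extra[k];  // ad hoc structured fields
  return JSON.stringify(rec);              // one log line
}

var line = logRecord('info', 'handled request', { latencyMs: 13 });

// "Parsing" is now just JSON.parse -- no regex required.
var rec = JSON.parse(line);
console.log(rec.msg, rec.latencyMs);  // prints: handled request 13
```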
+
+
That is not said lightly. The "Unix Way" of small focused tools lightly coupled with text output is important. JSON is less "text-y" than, e.g., Apache common log format. JSON makes grep and awk awkward. Using less directly on a log is handy.
+
+
+But not handy enough. That 80's pastel jumpsuit awkwardness you're feeling isn't the JSON, it's your tools. Time to find a json tool -- json is one, and bunyan, described below, is another. Time to learn your JSON library instead of your regex library: JavaScript, Python, Ruby, Java, Perl.
+
+
+Time to burn your log4j Layout classes and move formatting to the tools side. Creating a log message with semantic information and throwing that away to make a string is silly. The win of being able to trivially parse log records is huge. The possibilities opened up by being able to add ad hoc structured information to individual log records are interesting: think program state metrics, think feeding to Splunk or loggly, think easy audit logs.
+
+
Introducing Bunyan
+
+
Bunyan is a node.js module for logging in JSON and a bunyan CLI tool to view those logs.
Bunyan is log4j-like: create a Logger with a name, call log.info(...), etc. However it has no intention of reproducing much of the functionality of log4j. IMO, much of that is overkill for the types of services you'll tend to be writing with node.js.
+
+
Longer Bunyan Example
+
+
Let's walk through a bigger example to show some interesting things in Bunyan. We'll create a very small "Hello API" server using the excellent restify library -- which we use heavily here at Joyent. (Bunyan doesn't require restify at all; you can easily use Bunyan with Express or whatever.)
+
+
You can follow along in https://github.com/trentm/hello-json-logging if you like. Note that I'm using the current HEAD of the bunyan and restify trees here, so details might change a bit. Prerequisite: a node 0.6.x installation.
Every Bunyan logger must have a name. Unlike log4j, this is not a hierarchical dotted namespace. It is just a name field for the log records.
+
+
Every Bunyan logger has one or more streams, to which log records are written. Here we've defined two: logging at DEBUG level and above is written to stdout, and logging at TRACE and above is appended to 'hello.log'.
+
+
Bunyan has the concept of serializers: a registry of functions that know how to convert a JavaScript object for a certain log record field to a nice JSON representation for logging. For example, here we register the Logger.stdSerializers.req function to convert HTTP Request objects (using the field name "req") to JSON. More on serializers later.
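The logger-creation code this section describes was lost in formatting; a sketch consistent with the description above (the logger name and the 'hello.log' path follow the text, but details are a reconstruction and may differ from the original gist):

```js
var Logger = require('bunyan');

var log = new Logger({
  name: 'helloapi',                 // every Bunyan logger needs a name
  streams: [
    { stream: process.stdout, level: 'debug' },  // DEBUG and above to stdout
    { path: 'hello.log', level: 'trace' }        // TRACE and above appended to a file
  ],
  serializers: {
    req: Logger.stdSerializers.req  // how to serialize HTTP Request objects
  }
});
```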
+
+
Restify Server
+
+
Restify 1.x and above has bunyan support baked in. You pass in your Bunyan logger like this:
+
+
var server = restify.createServer({
+ name: 'Hello API',
+ log: log // Pass our logger to restify.
+});
+
+
+
Our simple API will have a single GET /hello?name=NAME endpoint:
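The endpoint code itself is missing here; this reconstruction is consistent with the req.log.debug line quoted later in the post, though the route name and the fallback value are guesses:

```js
server.get({ path: '/hello', name: 'SayHello' }, function (req, res, next) {
  var caller = req.params.name || 'caller';
  req.log.debug('caller is "%s"', caller); // (2)
  res.send({ hello: caller });
  return next();
});
```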
If we run that, node server.js, and call the endpoint, we get the expected restify response:
+
+
$ curl -iSs http://0.0.0.0:8080/hello?name=paul
+HTTP/1.1 200 OK
+Access-Control-Allow-Origin: *
+Access-Control-Allow-Headers: Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version
+Access-Control-Expose-Headers: X-Api-Version, X-Request-Id, X-Response-Time
+Server: Hello API
+X-Request-Id: f6aaf942-c60d-4c72-8ddd-bada459db5e3
+Access-Control-Allow-Methods: GET
+Connection: close
+Content-Length: 16
+Content-MD5: Xmn3QcFXaIaKw9RPUARGBA==
+Content-Type: application/json
+Date: Tue, 07 Feb 2012 19:12:35 GMT
+X-Response-Time: 4
+
+{"hello":"paul"}
+
+
+
Setup Server Logging
+
+
Let's add two things to our server. First, we'll use the server.pre to hook into restify's request handling before routing where we'll log the request.
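The server.pre snippet referenced here didn't survive; a reconstruction based on the surrounding description (the 'start' message is illustrative):

```js
server.pre(function (req, res, next) {
  req.log.info({ req: req }, 'start'); // (1) "req" is handled by the serializer
  return next();
});
```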
This is the first time we've seen this log.info style with an object as the first argument. Bunyan logging methods (log.trace, log.debug, ...) all support an optional first object argument with extra log record fields:
+
+
log.info(<object> fields, <string> msg, ...)
+
+
+
Here we pass in the restify Request object, req. The "req" serializer we registered above will come into play here, but bear with me.
+
+
Remember that we already had this debug log statement in our endpoint handler:
+
+
req.log.debug('caller is "%s"', caller); // (2)
+
+
+
Second, use the restify server after event to log the response:
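The 'after' handler was also lost in formatting; a sketch matching the description (the 'finished' message is illustrative):

```js
server.on('after', function (req, res, route) {
  req.log.info({ res: res, route: route }, 'finished'); // (3)
});
```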
The last two log messages include a "req_id" field (added to the req.log logger by restify). Note that this is the same UUID as the "X-Request-Id" header in the curl response. This means that if you use req.log for logging in your API handlers you will get an easy way to collate all logging for particular requests.
+
+
+If yours is an SOA system with many services, a best practice is to carry that X-Request-Id/req_id through your system to enable collating the handling of a single top-level request.
+
The last two log messages include a "route" field. This tells you which handler restify routed the request to. Besides being possibly useful for debugging, this can be very helpful for log-based monitoring of endpoints on a server.
+
+
+
+Recall that we also set up all logging to go to the "hello.log" file. This was set at the TRACE level. Restify will log more detail of its operation at the trace level. See my "hello.log" for an example. The bunyan tool does a decent job of nicely formatting multiline messages and "req"/"res" keys (with color, not shown in the gist).
+
+
This is logging you can use effectively.
+
+
Other Tools
+
+
Bunyan is just one of many options for logging in node.js-land. Others (that I know of) supporting JSON logging are winston and logmagic. Paul Querna has an excellent post on using JSON for logging, which shows logmagic usage and also touches on topics like the GELF logging format, log transporting, indexing and searching.
+
+
Final Thoughts
+
+
+Parsing challenges won't ever completely go away, but they can for your logs if you use JSON. Collating log records across logs from multiple nodes is facilitated by a common "time" field. Correlating logging across multiple services is enabled by carrying a common "req_id" (or equivalent) through all such logs.
+
+
+Separate log files for a single service are an anti-pattern. The typical Apache example of separate access and error logs is legacy, not an example to follow. A JSON log provides the structure necessary for tooling to easily filter for log records of a particular type.
+
+
+JSON logs bring possibilities. Feeding to tools like Splunk becomes easy. Ad hoc fields allow for a lightly spec'd comm channel from apps to other services: records with a "metric" field could feed to statsd, records with "loggly: true" could feed to loggly.com.
+
+
Here I've described a very simple example of restify and bunyan usage for node.js-based API services with easy JSON logging. Restify provides a powerful framework for robust API services. Bunyan provides a light API for nice JSON logging and the beginnings of tooling to help consume Bunyan JSON logs.
+
+
Update (29-Mar-2012): Fix styles somewhat for RSS readers.
diff --git a/doc/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md b/doc/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md
new file mode 100644
index 00000000000..26805b7a4c8
--- /dev/null
+++ b/doc/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md
@@ -0,0 +1,167 @@
+title: Managing Node.js Dependencies with Shrinkwrap
+author: Dave Pacheco
+date: Mon Feb 27 2012 10:51:59 GMT-0800 (PST)
+status: publish
+category: npm
+slug: managing-node-js-dependencies-with-shrinkwrap
+
+
+Photo by Luc Viatour (flickr)
+
+
Managing dependencies is a fundamental problem in building complex software. The terrific success of github and npm has made code reuse especially easy in the Node world, where packages don't exist in isolation but rather as nodes in a large graph. The software is constantly changing (releasing new versions), and each package has its own constraints about what other packages it requires to run (dependencies). npm keeps track of these constraints, and authors express what kinds of changes are compatible using semantic versioning, allowing authors to specify that their package will work with even future versions of its dependencies as long as the semantic versions are assigned properly.
+
+
+
This does mean that when you "npm install" a package with dependencies, there's no guarantee that you'll get the same set of code now that you would have gotten an hour ago, or that you would get if you were to run it again an hour later. You may get a bunch of bug fixes now that weren't available an hour ago. This is great during development, where you want to keep up with changes upstream. It's not necessarily what you want for deployment, though, where you want to validate whatever bits you're actually shipping.
+
+
+
Put differently, it's understood that all software changes incur some risk, and it's critical to be able to manage this risk on your own terms. Taking that risk in development is good because by definition that's when you're incorporating and testing software changes. On the other hand, if you're shipping production software, you probably don't want to take this risk when cutting a release candidate (i.e. build time) or when you actually ship (i.e. deploy time) because you want to validate whatever you ship.
+
+
+
You can address a simple case of this problem by only depending on specific versions of packages, allowing no semver flexibility at all, but this falls apart when you depend on packages that don't also adopt the same principle. Many of us at Joyent started wondering: can we generalize this approach?
+
+
NAME
+ npm-shrinkwrap -- Lock down dependency versions
+
+SYNOPSIS
+ npm shrinkwrap
+
+DESCRIPTION
+ This command locks down the versions of a package's dependencies so
+ that you can control exactly which versions of each dependency will
+ be used when your package is installed.
If these are the only versions of A, B, and C available in the registry, then a normal "npm install A" will install:
+
+
+
A@0.1.0
+└─┬ B@0.0.1
+ └── C@0.0.1
+
If B@0.0.2 is published, then a fresh "npm install A" will install:
+
+
+
A@0.1.0
+└─┬ B@0.0.2
+ └── C@0.0.1
+
assuming the new version did not modify B's dependencies. Of course, the new version of B could include a new version of C and any number of new dependencies. As we said before, if A's author doesn't want that, she could specify a dependency on B@0.0.1. But if A's author and B's author are not the same person, there's no way for A's author to say that she does not want to pull in newly published versions of C when B hasn't changed at all.
+
+
+
In this case, A's author can use
+
+
+
# npm shrinkwrap
+
This generates npm-shrinkwrap.json, which will look something like this:
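Along these lines (a sketch matching the A/B/C example above; the real file may carry additional fields):

```json
{
  "name": "A",
  "version": "0.1.0",
  "dependencies": {
    "B": {
      "version": "0.0.1",
      "dependencies": {
        "C": {
          "version": "0.0.1"
        }
      }
    }
  }
}
```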
+
+
The shrinkwrap command has locked down the dependencies based on what's currently installed in node_modules. When "npm install" installs a package with an npm-shrinkwrap.json file in the package root, the shrinkwrap file (rather than the package.json files) completely drives the installation of that package and all of its dependencies (recursively). So now the author publishes A@0.1.0, and subsequent installs of this package will use B@0.0.1 and C@0.0.1, regardless of the dependencies and versions listed in A's, B's, and C's package.json files. If the authors of B and C publish new versions, they won't be used to install A because the shrinkwrap refers to older versions. Even if you generate a new shrinkwrap, it will still reference the older versions, since "npm shrinkwrap" uses what's installed locally rather than what's available in the registry.
+
+
+
Using shrinkwrapped packages
+
Using a shrinkwrapped package is no different than using any other package: you can "npm install" it by hand, or add a dependency to your package.json file and "npm install" it.
+
+
+
Building shrinkwrapped packages
+
To shrinkwrap an existing package:
+
+
+
+
1. Run "npm install" in the package root to install the current versions of all dependencies.
2. Validate that the package works as expected with these versions.
3. Run "npm shrinkwrap", add npm-shrinkwrap.json to git, and publish your package.
+
+
To add or update a dependency in a shrinkwrapped package:
+
+
+
+
1. Run "npm install" in the package root to install the current versions of all dependencies.
2. Add or update dependencies. "npm install" each new or updated package individually and then update package.json.
3. Validate that the package works as expected with the new dependencies.
4. Run "npm shrinkwrap", commit the new npm-shrinkwrap.json, and publish your package.
+
+
You can still use npm outdated(1) to view which dependencies have newer versions available.
+
+
+
For more details, check out the full docs on npm shrinkwrap, from which much of the above is taken.
+
+
+
Why not just check node_modules into git?
+
One previously proposed solution is to "npm install" your dependencies during development and commit the results into source control. Then you deploy your app from a specific git SHA knowing you've got exactly the same bits that you tested in development. This does address the problem, but it has its own issues: for one, binaries are tricky because you need to "npm install" them to get their sources, but this builds the [system-dependent] binary too. You can avoid checking in the binaries and use "npm rebuild" at build time, but we've had a lot of difficulty trying to do this.[2] At best, this is second-class treatment for binary modules, which are critical for many important types of Node applications.[3]
+
+
+
Besides the issues with binary modules, this approach just felt wrong to many of us. There's a reason we don't check binaries into source control, and it's not just because they're platform-dependent. (After all, we could build and check in binaries for all supported platforms and operating systems.) It's because that approach is error-prone and redundant: error-prone because it introduces a new human failure mode where someone checks in a source change but doesn't regenerate all the binaries, and redundant because the binaries can always be built from the sources alone. An important principle of software version control is that you don't check in files derived directly from other files by a simple transformation.[4] Instead, you check in the original sources and automate the transformations via the build process.
+
+
+
Dependencies are just like binaries in this regard: they're files derived from a simple transformation of something else that is (or could easily be) already available: the name and version of the dependency. Checking them in has all the same problems as checking in binaries: people could update package.json without updating the checked-in module (or vice versa). Besides that, adding new dependencies has to be done by hand, introducing more opportunities for error (checking in the wrong files, not checking in certain files, inadvertently changing files, and so on). Our feeling was: why check in this whole dependency tree (and create a mess for binary add-ons) when we could just check in the package name and version and have the build process do the rest?
+
+
+
Finally, the approach of checking in node_modules doesn't really scale for us. We've got at least a dozen repos that will use restify, and it doesn't make sense to check that code in everywhere when we could instead just specify which version each one is using. There's another principle at work here, which is separation of concerns: each repo specifies what it needs, while the build process figures out where to get it.
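In other words, each repo checks in only a small declaration like the following (name and version here are hypothetical), and the build process does the rest:

```json
{
  "name": "my-service",
  "version": "1.0.0",
  "dependencies": {
    "restify": "1.4.0"
  }
}
```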
+
+
+
What if an author republishes an existing version of a package?
+
We're not suggesting deploying a shrinkwrapped package directly and running "npm install" to install from shrinkwrap in production. We already have a build process to deal with binary modules and other automateable tasks. That's where we do the "npm install". We tar up the result and distribute the tarball. Since we test each build before shipping, we won't deploy something we didn't test.
+
+
+
It's still possible to pick up newly published versions of existing packages at build time. We assume force publish is not that common in the first place, let alone force publish that breaks compatibility. If you're worried about this, you can use git SHAs in the shrinkwrap or even consider maintaining a mirror of the part of the npm registry that you use and require human confirmation before mirroring unpublishes.
+
+
+
Final thoughts
+
Of course, the details of each use case matter a lot, and the world doesn't have to pick just one solution. If you like checking in node_modules, you should keep doing that. We've chosen the shrinkwrap route because that works better for us.
+
+
+
It's not exactly news that Joyent is heavy on Node. Node is the heart of our SmartDataCenter (SDC) product, whose public-facing web portal, public API, Cloud Analytics, provisioning, billing, heartbeating, and other services are all implemented in Node. That's why it's so important to us to have robust components (like logging and REST) and tools for understanding production failures post mortem, profiling Node apps in production, and now managing Node dependencies. Again, we're interested to hear feedback from others using these tools.
+
+
[1] Much of this section is taken directly from the "npm shrinkwrap" documentation.
+
+
+
[2] We've had a lot of trouble with checking in node_modules with binary dependencies. The first problem is figuring out exactly which files not to check in (*.o, *.node, *.dylib, *.so, *.a, ...). When Mark went to apply this to one of our internal services, the "npm rebuild" step blew away half of the dependency tree because it ran "make clean", which in the ldapjs dependency brings the repo to a clean slate by blowing away its dependencies. Later, a new (but highly experienced) engineer on our team was tasked with fixing a bug in our Node-based DHCP server. To fix the bug, we went with a new dependency. He tried checking in node_modules, which added 190,000 lines of code to a repo that was previously a few hundred lines. And despite doing everything he could think of to do this correctly and test it properly, the change broke the build because of the binary modules. Having tried this approach a few times now, it appears quite difficult to get right, and as I pointed out above, the lack of actual documentation and real-world examples suggests that others either aren't using binary modules (which we know isn't true) or haven't had much better luck with this approach.
+
+
+
[3] Like a good Node-based distributed system, our architecture uses lots of small HTTP servers. Each of these serves a REST API using restify. restify uses the binary module node-dtrace-provider, which gives each of our services deep DTrace-based observability for free. So literally almost all of our components are or will soon be depending on a binary add-on. Additionally, the foundation of Cloud Analytics are a pair of binary modules that extract data from DTrace and kstat. So this isn't a corner case for us, and we don't believe we're exceptional in this regard. The popular hiredis package for interfacing with redis from Node is also a binary module.
+
+
+
[4] Note that I said this is an important principle for software version control, not using git in general. People use git for lots of things where checking in binaries and other derived files is probably fine. Also, I'm not interested in proselytizing; if you want to do this for software version control too, go ahead. But don't do it out of ignorance of existing successful software engineering practices.
diff --git a/doc/blog/npm/npm-1-0-global-vs-local-installation.md b/doc/blog/npm/npm-1-0-global-vs-local-installation.md
new file mode 100644
index 00000000000..7c16003d6fa
--- /dev/null
+++ b/doc/blog/npm/npm-1-0-global-vs-local-installation.md
@@ -0,0 +1,64 @@
+title: npm 1.0: Global vs Local installation
+author: Isaac Schlueter
+date: Wed Mar 23 2011 23:07:13 GMT-0700 (PDT)
+status: publish
+category: npm
+slug: npm-1-0-global-vs-local-installation
+
+
More than anything else, the driving force behind the npm 1.0 rearchitecture was the desire to simplify what a package installation directory structure looks like.
+
+
In npm 0.x, there was a command called bundle that a lot of people liked. bundle let you install your dependencies locally in your project, but even still, it was basically a hack that never really worked very reliably.
+
+
Also, there was that activation/deactivation thing. That’s confusing.
+
+
Two paths
+
+
In npm 1.0, there are two ways to install things:
+
+
* globally -- This drops modules in {prefix}/lib/node_modules, and puts executable files in {prefix}/bin, where {prefix} is usually something like /usr/local. It also installs man pages in {prefix}/share/man, if they’re supplied.
* locally -- This installs your package in the current working directory. Node modules go in ./node_modules, executables go in ./node_modules/.bin/, and man pages aren’t installed at all.
+
+
Which to choose
+
+
Whether to install a package globally or locally depends on the global config, which is aliased to the -g command line switch.
+
+
Just like how global variables are kind of gross, but also necessary in some cases, global packages are important, but best avoided if not needed.
+
+
In general, the rule of thumb is:
+
+
* If you’re installing something that you want to use in your program, using require('whatever'), then install it locally, at the root of your project.
* If you’re installing something that you want to use in your shell, on the command line or something, install it globally, so that its binaries end up in your PATH environment variable.
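As a concrete sketch (the package names are only examples):

```
npm install express          # local: this project can require('express')
npm install coffee-script -g # global: the coffee executable lands in your PATH
```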
+
+
When you can't choose
+
+
Of course, there are some cases where you want to do both. Coffee-script and Express both are good examples of apps that have a command line interface, as well as a library. In those cases, you can do one of the following:
+
+
* Install it in both places. Seriously, are you that short on disk space? It’s fine, really. They’re tiny JavaScript programs.
* Install it globally, and then npm link coffee-script or npm link express (if you’re on a platform that supports symbolic links.) Then you only need to update the global copy to update all the symlinks as well.
+
+
The first option is the best in my opinion. Simple, clear, explicit. The second is really handy if you are going to re-use the same library in a bunch of different projects. (More on npm link in a future installment.)
+
+
You can probably think of other ways to do it by messing with environment variables. But I don’t recommend those ways. Go with the grain.
+
+
Slight exception: It’s not always the cwd.
+
+
Let’s say you do something like this:
+
+
cd ~/projects/foo # go into my project
+npm install express # ./node_modules/express
+cd lib/utils # move around in there
+vim some-thing.js # edit some stuff, work work work
+npm install redis # ./lib/utils/node_modules/redis!? ew.
+
+
In this case, npm will install redis into ~/projects/foo/node_modules/redis. Sort of like how git will work anywhere within a git repository, npm will work anywhere within a package, defined by having a node_modules folder.
+
+
Test runners and stuff
+
+
If your package's scripts.test command uses a command-line program installed by one of your dependencies, not to worry. npm makes ./node_modules/.bin the first entry in the PATH environment variable when running any lifecycle scripts, so this will work fine, even if your program is not globally installed:
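For instance, a package.json along these lines (name and versions hypothetical) will run the locally installed test runner's binary even though it isn't globally installed:

```json
{
  "name": "my-thing",
  "version": "0.0.1",
  "scripts": {
    "test": "tap test/*.js"
  },
  "dependencies": {
    "tap": "*"
  }
}
```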
+
+
In npm 0.x, there was a command called link. With it, you could “link-install” a package so that changes would be reflected in real-time. This is especially handy when you’re actually building something. You could make a few changes, run the command again, and voila, your new code would be run without having to re-install every time.
+
+
Of course, compiled modules still have to be rebuilt. That’s not ideal, but it’s a problem that will take more powerful magic to solve.
+
+
In npm 0.x, this was a pretty awful kludge. Back then, every package existed in some folder like:
+
+
prefix/lib/node/.npm/my-package/1.3.6/package
+
+
+
and the package’s version and name could be inferred from the path. Then, symbolic links were set up that looked like:
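Roughly like this (the exact form may have differed):

```
prefix/lib/node/my-package@1.3.6 -> prefix/lib/node/.npm/my-package/1.3.6/package
```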
It was easy enough to point that symlink to a different location. However, since the package.json file could change, that meant that the connection between the version and the folder was not reliable.
+
+
At first, this was just sort of something that we dealt with by saying, “Relink if you change the version.” However, as more and more edge cases arose, eventually the solution was to give link packages this fakey version of “9999.0.0-LINK-hash” so that npm knew it was an imposter. Sometimes the package was treated as if it had the 9999.0.0 version, and other times it was treated as if it had the version specified in the package.json.
+
+
A better way
+
+
For npm 1.0, we backed up and looked at what the actual use cases were. Most of the time when you link something you want one of the following:
+
+
+
globally install this package I’m working on so that I can run the command it creates and test its stuff as I work on it.
+
locally install my thing into some other thing that depends on it, so that the other thing can require() it.
+
+
+
And, in both cases, changes should be immediately apparent and not require any re-linking.
+
+
Also, there’s a third use case that I didn’t really appreciate until I started writing more programs that had more dependencies:
+
+
Globally install something, and use it in development in a bunch of projects, and then update them all at once so that they all use the latest version.
+
+
Really, the second case above is a special-case of this third case.
+
+
Link devel → global
+
+
The first step is to link your local project into the global install space. (See global vs local installation for more on this global/local business.)
+
+
I do this as I’m developing node projects (including npm itself).
+
+
cd ~/dev/js/node-tap # go into the project dir
+npm link # create symlinks into {prefix}
+
+
+
Because of how I have my computer set up, with /usr/local as my install prefix, I end up with a symlink from /usr/local/lib/node_modules/tap pointing to ~/dev/js/node-tap, and the executable linked to /usr/local/bin/tap.
+
+
Of course, if you set your paths differently, then you’ll have different results. (That’s why I tend to talk in terms of prefix rather than /usr/local.)
+
+
Link global → local
+
+
When you want to link the globally-installed package into your local development folder, you run npm link pkg where pkg is the name of the package that you want to install.
+
+
For example, let’s say that I wanted to write some tap tests for my node-glob package. I’d first do the steps above to link tap into the global install space, and then I’d do this:
+
+
cd ~/dev/js/node-glob # go to the project that uses the thing.
+npm link tap # link the global thing into my project.
+
+
+
Now when I make changes in ~/dev/js/node-tap, they’ll be immediately reflected in ~/dev/js/node-glob/node_modules/tap.
+
+
Link to stuff you don’t build
+
+
Let’s say I have 15 sites that all use express. I want the benefits of local development, but I also want to be able to update all my dev folders at once. You can globally install express, and then link it into your local development folder.
+
+
npm install express -g # install express globally
+cd ~/dev/js/my-blog # development folder one
+npm link express # link the global express into ./node_modules
+cd ~/dev/js/photo-site # other project folder
+npm link express # link express into here, as well
+
+ # time passes
+ # TJ releases some new stuff.
+ # you want this new stuff.
+
+npm update express -g # update the global install.
+ # this also updates my project folders.
+
+
+
Caveat: Not For Real Servers
+
+
npm link is a development tool. It’s awesome for managing packages on your local development box. But deploying with npm link is basically asking for problems, since it makes it super easy to update things without realizing it.
+
+
Caveat 2: Sorry, Windows!
+
+
I highly doubt that a native Windows node will ever have comparable symbolic link support to what Unix systems provide. I know that there are junctions and such, and I've heard legends about symbolic links on Windows 7.
+
+
When there is a native windows port of Node, if that native windows port has `fs.symlink` and `fs.readlink` support that is exactly identical to the way that they work on Unix, then this should work fine.
+
+
But I wouldn't hold my breath. Any bugs about this not working on a native Windows system (i.e., not Cygwin) will most likely be closed with wontfix.
+
+
+
Aside: Credit where Credit’s Due
+
+
Back before the Great Package Management Wars of Node 0.1, before npm or kiwi or mode or seed.js could do much of anything, and certainly before any of them had more than 2 users, Mikeal Rogers invited me to the Couch.io offices for lunch to talk about this npm registry thingie I’d mentioned wanting to build. (That is, to convince me to use CouchDB for it.)
+
+
Since he was volunteering to build the first version of it, and since couch is pretty much the ideal candidate for this use-case, it was an easy sell.
+
+
While I was there, he said, “Look. You need to be able to link a project directory as if it was installed as a package, and then have it all Just Work. Can you do that?”
+
+
I was like, “Well, I don’t know… I mean, there’s these edge cases, and it doesn’t really fit with the existing folder structure very well…”
+
+
“Dude. Either you do it, or I’m going to have to do it, and then there’ll be another package manager in node, instead of writing a registry for npm, and it won’t be as good anyway. Don’t be python.”
+
+
The rest is history.
diff --git a/doc/blog/npm/npm-1-0-released.md b/doc/blog/npm/npm-1-0-released.md
new file mode 100644
index 00000000000..ea26989c670
--- /dev/null
+++ b/doc/blog/npm/npm-1-0-released.md
@@ -0,0 +1,36 @@
+title: npm 1.0: Released
+author: Isaac Schlueter
+date: Sun May 01 2011 08:09:45 GMT-0700 (PDT)
+status: publish
+category: npm
+slug: npm-1-0-released
+
+
npm 1.0 has been released. Here are the highlights:
* Install script cleans up any 0.x cruft it finds. (That is, it removes old packages, so that they can be installed properly.)
* Simplified “search” command. One line per package, rather than one line per version.
* Renovated “completion” approach
* More help topics
* Simplified folder structure
+
+
The focus is on npm being a development tool, rather than an apt-wannabe.
+
+
Installing it
+
+
To get the new version, run this command:
+
+
curl http://npmjs.org/install.sh | sh
+
+
This will prompt to ask you if it’s ok to remove all the old 0.x cruft. If you want to not be asked, then do this:
+
+
curl http://npmjs.org/install.sh | clean=yes sh
+
+
Or, if you want to not do the cleanup, and leave the old stuff behind, then do this:
+
+
curl http://npmjs.org/install.sh | clean=no sh
+
+
A lot of people in the node community were brave testers and helped make this release a lot better (and swifter) than it would have otherwise been. Thanks :)
+
+
Code Freeze
+
+
npm will not have any major feature enhancements or architectural changes for at least 6 months. There are interesting developments planned that leverage npm in some ways, but it’s time to let the client itself settle. Also, I want to focus attention on some other problems for a little while.
diff --git a/doc/blog/npm/npm-1-0-the-new-ls.md b/doc/blog/npm/npm-1-0-the-new-ls.md
new file mode 100644
index 00000000000..da820a0d8b1
--- /dev/null
+++ b/doc/blog/npm/npm-1-0-the-new-ls.md
@@ -0,0 +1,144 @@
+title: npm 1.0: The New "ls"
+author: Isaac Schlueter
+date: Thu Mar 17 2011 23:22:17 GMT-0700 (PDT)
+status: publish
+category: npm
+slug: npm-1-0-the-new-ls
+
+
This is the first in a series of hopefully more than 1 posts, each detailing some aspect of npm 1.0.
+
+
In npm 0.x, the ls command combined searching the registry with reporting on what you have installed.
+
+
As the registry has grown in size, this has gotten unwieldy. Also, since npm 1.0 manages dependencies differently, nesting them in the node_modules folder and installing locally by default, there are different things that you want to view.
+
+
The functionality of the ls command was split into two different parts. search is now the way to find things on the registry (and it only reports one line per package, instead of one line per version), and ls shows a tree view of the packages that are installed locally.
This is after I’ve done npm install semver ronn express in the npm source directory. Since express isn’t actually a dependency of npm, it shows up with that “extraneous” marker.
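A sketch of the kind of tree view the new ls prints (the versions and path here are illustrative, not the actual output):

```
npm@1.0.0 /path/to/npm
├── semver@1.0.1
├── ronn@0.3.5
└── express@2.0.0 extraneous
```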
+
+
Let’s see what happens when we create a broken situation:
Tree views are great for human readability, but sometimes you want to pipe that stuff to another program. For that output, I took the same data structure, but instead of building up a tree-view string for each line, it spits out just the folders, like this:
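That is, one installed folder per line, something like (paths illustrative):

```
/path/to/npm/node_modules/semver
/path/to/npm/node_modules/ronn
/path/to/npm/node_modules/express
```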
#572 Don't print result of --eval in CLI (Ben Noordhuis)
+
+
#1223 Fix http.ClientRequest crashes if end() was called twice (koichik)
+
+
#1383 Emit 'close' after all connections have closed (Felix Geisendörfer)
+
+
Add sprintf-like util.format() function (Ben Noordhuis)
+
+
Add support for TLS SNI (Fedor Indutny)
+
+
New http agent implementation. Off by default; the command line flag --use-http2 will enable it. make test-http2 will run the tests for the new implementation. (Mikeal Rogers)
+
+
+
+Update: The .exe has a bug that results in incompatibility with Windows XP and Server 2003. This has been reported in issue #1592 and fixed. A new binary was made that is compatible with the older Windows versions: http://nodejs.org/dist/v0.5.5/node-186364e.exe.
diff --git a/doc/blog/release/node-v0-5-6.md b/doc/blog/release/node-v0-5-6.md
new file mode 100644
index 00000000000..ca1a392b013
--- /dev/null
+++ b/doc/blog/release/node-v0-5-6.md
@@ -0,0 +1,49 @@
+version: 0.5.6
+title: Node v0.5.6 (unstable)
+author: piscisaureus
+date: Fri Sep 09 2011 16:30:39 GMT-0700 (PDT)
+status: publish
+category: release
+slug: node-v0-5-6
+
+2011.09.08, Version 0.5.6 (unstable)
+
+
#345, #1635, #1648 Documentation improvements (Thomas Shinnick, Abimanyu Raja, AJ ONeal, Koichi Kobayashi, Michael Jackson, Logan Smyth, Ben Noordhuis)
+
#650 Improve path parsing on windows (Bert Belder)
+
#752 Remove headers sent check in OutgoingMessage.getHeader() (Peter Lyons)
+
#1236, #1438, #1506, #1513, #1621, #1640, #1647 Libuv-related bugs fixed (Jorge Chamorro Bieling, Peter Bright, Luis Lavena, Igor Zinkovsky)
#1586 Make socket write encoding case-insensitive (Koichi Kobayashi)
+
#1591, #1656, #1657 Implement fs in libuv, remove libeio and pthread-win32 dependency on windows (Igor Zinkovsky, Ben Noordhuis, Ryan Dahl, Isaac Schlueter)
+
#1592 Don't load-time link against CreateSymbolicLink on windows (Peter Bright)
+
#1601 Improve API consistency when dealing with the socket underlying a HTTP client request (Mikeal Rogers)
+
#1610 Remove DigiNotar CA from trusted list (Isaac Schlueter)
+
#1617 Added some win32 os functions (Karl Skomski)
+
#1624 avoid buffer overrun with 'binary' encoding (Koichi Kobayashi)
+
#1633 make Buffer.write() always set _charsWritten (Koichi Kobayashi)
+
#1644 Windows: set executables to be console programs (Peter Bright)
+
#1651 improve inspection for sparse array (Koichi Kobayashi)
+
#1672 set .code='ECONNRESET' on socket hang up errors (Ben Noordhuis)
+
Add test case for foaf+ssl client certificate (Niclas Hoyer)
+
+
+Download: http://nodejs.org/dist/v0.5.6/node-v0.5.6.tar.gz
+
+Windows Executable: http://nodejs.org/dist/v0.5.6/node.exe
+
+Website: http://nodejs.org/docs/v0.5.6/
+
+Documentation: http://nodejs.org/docs/v0.5.6/api/
diff --git a/doc/blog/release/node-v0-6-0.md b/doc/blog/release/node-v0-6-0.md
new file mode 100644
index 00000000000..ea0e71373b5
--- /dev/null
+++ b/doc/blog/release/node-v0-6-0.md
@@ -0,0 +1,80 @@
+version: 0.6.0
+title: Node v0.6.0
+author: ryandahl
+date: Sat Nov 05 2011 02:07:10 GMT-0700 (PDT)
+status: publish
+category: release
+slug: node-v0-6-0
+
+We are happy to announce the third stable branch of Node, v0.6. We will be freezing JavaScript, C++, and binary interfaces for all v0.6 releases.
+
+The major differences between v0.4 and v0.6 are
+
* Native Windows support using I/O Completion Ports for sockets.
* Integrated load balancing over multiple processes. docs
* Better support for IPC between Node instances. docs
+
+
+In order to support Windows we reworked much of the core architecture. There was some fear that our work would degrade performance on UNIX systems but this was not the case. Here is a Linux system we benched for demonstration:
+
+
|                            | v0.4.12 (linux) | v0.6.0 (linux) |
|----------------------------|-----------------|----------------|
| http_simple.js /bytes/1024 | 5461 r/s        | 6263 r/s       |
| io.js read                 | 19.75 mB/s      | 26.63 mB/s     |
| io.js write                | 21.60 mB/s      | 17.40 mB/s     |
| startup.js                 | 74.7 ms         | 49.6 ms        |
+
+Bigger is better in http and io benchmarks, smaller is better in startup. The http benchmark was done with 600 clients on a 10GE network served from three load generation machines.
+
+In the last version of Node, v0.4, we could only run Node on Windows with Cygwin. Therefore we've gotten massive improvements by targeting the native APIs. Benchmarks on the same machine:
+
+
|                            | v0.4.12 (windows) | v0.6.0 (windows) |
|----------------------------|-------------------|------------------|
| http_simple.js /bytes/1024 | 3858 r/s          | 5823 r/s         |
| io.js read                 | 12.41 mB/s        | 26.51 mB/s       |
| io.js write                | 12.61 mB/s        | 33.58 mB/s       |
| startup.js                 | 152.81 ms         | 52.04 ms         |
+
+We consider this a good intermediate stage for the Windows port. There is still work to be done. For example, we are not yet providing users with a blessed path for building addon modules in MS Visual Studio. Work will continue in later releases.
+
+For users upgrading code bases from v0.4 to v0.6 we've documented most of the issues that you will run into. Most people find the change painless. Despite the long list of changes most core APIs remain untouched.
+
+Our release cycle will be tightened dramatically now. Expect to see a new stable branch in January. We wish to eventually have our releases in sync with Chrome and V8's 6 week cycle.
+
+Thank you to everyone who contributed code, tests, docs, or sent in bug reports.
+
+Here are the changes between v0.5.12 and v0.6.0:
+
+2011.11.04, Version 0.6.0 (stable)
+
print undefined on undefined values in REPL (Nathan Rajlich)
+
doc improvements (koichik, seebees, bnoordhuis, Maciej Małecki, Jacob Kragh)
+
support native addon loading in windows (Bert Belder)
+
rename getNetworkInterfaces() to networkInterfaces() (bnoordhuis)
+
add pending accepts knob for windows (igorzi)
+
http.request(url.parse(x)) (seebees)
+
#1929 zlib Respond to 'resume' events properly (isaacs)
+
stream.pipe: Remove resume and pause events
+
test fixes for windows (igorzi)
+
build system improvements (bnoordhuis)
+
#1936 tls: does not emit 'end' from EncryptedStream (koichik)
+
#758 tls: add address(), remoteAddress/remotePort
+
#1399 http: emit Error object after .abort() (bnoordhuis)
+
#1999 fs: make mkdir() default to 0777 permissions (bnoordhuis)
+
#2001 fix pipe error codes
+
#2002 Socket.write should reset timeout timer
+
stdout and stderr are blocking when associated with file too.
+
remote debugger support on windows (Bert Belder)
+
convenience methods for zlib (Matt Robenolt)
+
process.kill support on windows (igorzi)
+
process.uptime() support on windows (igorzi)
+
Return IPv4 addresses before IPv6 addresses from getaddrinfo
diff --git a/doc/blog/release/node-v0-7-0-unstable.md b/doc/blog/release/node-v0-7-0-unstable.md
new file mode 100644
index 00000000000..3036b9228f8
--- /dev/null
+++ b/doc/blog/release/node-v0-7-0-unstable.md
@@ -0,0 +1,29 @@
+version: 0.7.0
+title: Node v0.7.0 (Unstable)
+author: ryandahl
+date: Mon Jan 16 2012 19:58:28 GMT-0800 (PST)
+status: publish
+category: release
+slug: node-v0-7-0-unstable
+
+This is the first release in the unstable v0.7 series. Almost all users will want to remain using the stable v0.6 releases.
+
+2012.01.16, Version 0.7.0 (unstable)
+
+
Upgrade V8 to 3.8.6
+
Use GYP build system on unix (Ben Noordhuis)
+
Experimental isolates support (Ben Noordhuis)
+
Improvements to Cluster API (Andreas Madsen)
+
Use isolates for internal debugger (Fedor Indutny)
+
diff --git a/doc/blog/release/node-v0.8.0.md b/doc/blog/release/node-v0.8.0.md
new file mode 100644
index 00000000000..118668fc923
--- /dev/null
+++ b/doc/blog/release/node-v0.8.0.md
@@ -0,0 +1,384 @@
+title: Node v0.8.0
+date: Mon Jun 25 2012 09:00:00 GMT-0700 (PDT)
+version: 0.8.0
+category: release
+author: Isaac Z. Schlueter
+slug: node-v0-8-0
+status: publish
+
+I am thrilled to announce the arrival of a new stable version of
+Node.js.
+
+Compared with the v0.6 releases of Node, this release brings significant
+improvements in many key performance metrics, as well as
+cleanup in several core APIs, and the addition of new debugging
+features.
+
+## tl;dr
+
+With version 0.8.0:
+
+1. Node got a lot faster.
+2. Node got more stable.
+3. You can do stuff with file descriptors again.
+4. The [cluster module](http://nodejs.org/api/cluster.html) is much more
+ awesome.
+5. The [domain module](http://nodejs.org/api/domain.html) was added.
+6. The repl is better.
+7. The build system changed from waf to gyp.
+8. [Some other stuff changed,
+ too.](https://github.com/joyent/node/wiki/API-changes-between-v0.6-and-v0.8)
+9. Scroll to the bottom for the links to install it.
+
+## Performance
+
+This version brings a few key enhancements in V8 and libuv that result
+in significantly improved throughput.
+
+All of these benchmarks were run on my OS X laptop, but the results are
+typical of what we're seeing on SmartOS, Linux, and Windows.
+
+```
+# v0.6.19, writes
+Wrote 1024 byte buffers: 19.428793471925395 mB/s
+Wrote 4096 byte buffers: 59.737156511350065 mB/s
+Wrote 16384 byte buffers: 83.97010664203543 mB/s
+Wrote 65536 byte buffers: 97.4184120798831 mB/s
+
+# v0.8.0, writes
+Wrote 1024 byte buffers: 61.236987140232706 mB/s +215.19%
+Wrote 4096 byte buffers: 109.05125408942203 mB/s +82.55%
+Wrote 16384 byte buffers: 182.18254691200585 mB/s +116.96%
+Wrote 65536 byte buffers: 181.91740949608877 mB/s +86.74%
+
+# v0.6.19, reads
+Read 1024 byte buffers: 29.96883241428914 mB/s
+Read 4096 byte buffers: 62.34413965087282 mB/s
+Read 16384 byte buffers: 165.7550140891762 mB/s
+Read 65536 byte buffers: 266.73779674579885 mB/s
+
+# v0.8.0, reads
+Read 1024 byte buffers: 57.63688760806916 mB/s +92.32%
+Read 4096 byte buffers: 136.7801942278758 mB/s +119.40%
+Read 16384 byte buffers: 244.8579823702253 mB/s +47.72%
+Read 65536 byte buffers: 302.2974607013301 mB/s +13.33%
+```
+
+The difference is not small. If you are writing network programs with
+node, and pushing a lot of traffic, you will notice this improvement.
+
+The speed of reading files got quite a bit faster as well:
+
+```
+# v0.6.19
+read the file 110948 times (higher is better)
+90141.32 ns per read (lower is better)
+11093.69 reads per sec (higher is better)
+
+# v0.8.0
+read the file 158193 times (higher is better) +42.58%
+63217.16 ns per read (lower is better) -29.87%
+15818.48 reads per sec (higher is better) +42.59%
+```
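The percentage deltas shown in these listings are plain relative differences; a quick sketch of the arithmetic:

```javascript
// Relative change between two benchmark numbers, as a percentage.
// E.g. reads/sec going from 11093.69 (v0.6.19) to 15818.48 (v0.8.0).
function pctDiff(oldValue, newValue) {
  return ((newValue - oldValue) / oldValue) * 100;
}

console.log(pctDiff(11093.69, 15818.48).toFixed(2)); // "42.59"
```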
+
+And of course, the ubiquitous 'hello, world' http server benchmark got
+significantly faster, especially for large message sizes:
+
+```
+$ TYPE=bytes LENGTH=123 bash benchmark/http.sh 2>&1 | grep Req
+# v0.6.19
+Requests per second: 3317.24 [#/sec] (mean)
+# v0.8.0
+Requests per second: 3795.34 [#/sec] (mean) +14.41%
+
+
+$ TYPE=bytes LENGTH=1024 bash benchmark/http.sh 2>&1 | grep Req
+# v0.6.19
+Requests per second: 3258.42 [#/sec] (mean)
+# v0.8.0
+Requests per second: 3585.62 [#/sec] (mean) +10.04%
+
+
+$ TYPE=bytes LENGTH=123456 bash benchmark/http.sh 2>&1 | grep Req
+# v0.6.19
+Requests per second: 218.51 [#/sec] (mean)
+# v0.8.0
+Requests per second: 749.17 [#/sec] (mean) +242.85%
+```
+
+The difference with Unicode responses is even more pronounced:
+
+```
+$ TYPE=unicode LENGTH=1024 bash benchmark/http.sh 2>&1 | grep Req
+# v0.6.19
+Requests per second: 3228.23 [#/sec] (mean)
+# v0.8.0
+Requests per second: 3317.60 [#/sec] (mean) +2.77%
+
+$ TYPE=unicode LENGTH=12345 bash benchmark/http.sh 2>&1 | grep Req
+# v0.6.19
+Requests per second: 1703.96 [#/sec] (mean)
+# v0.8.0
+Requests per second: 2431.61 [#/sec] (mean) +42.70%
+
+$ TYPE=unicode LENGTH=55555 bash benchmark/http.sh 2>&1 | grep Req
+# v0.6.19
+Requests per second: 161.65 [#/sec] (mean)
+# v0.8.0
+Requests per second: 980.38 [#/sec] (mean) +506.48%
+
+$ TYPE=unicode LENGTH=99999 bash benchmark/http.sh 2>&1 | grep Req
+# v0.6.19
+^C # lost patience after a few hours
+# v0.8.0
+Requests per second: 252.69 [#/sec] (mean)
+```
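One reason the unicode runs cost more (an aside of ours, assuming the benchmark's unicode payload is non-ASCII text): a JavaScript string must be encoded on every write, and N characters can become more than N bytes on the wire:

```javascript
// A 1000-character string of non-ASCII text occupies 2000 bytes in UTF-8,
// so the server encodes and pushes twice the bytes that LENGTH suggests.
const payload = 'λ'.repeat(1000);
console.log(payload.length);              // 1000 characters
console.log(Buffer.byteLength(payload));  // 2000 UTF-8 bytes
```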
+
+The more bytes you're pushing, and the more work you're doing, the more
+win you'll see with node 0.8 over 0.6.
+
+The vast majority of the performance boost is due to improvements in V8.
+They've been very responsive to the needs of the Node.js project. A lot
+of Node's success is due to being built on such a stellar VM.
+
+
+## Build System
+
+Since its inception, Node has used the WAF build system, a Python-based
+system similar to SCons. The Chrome project recently moved from SCons
+to the GYP meta-build system, which generates Makefiles, Visual Studio
+project files, or Xcode projects depending on the target. V8, being
+part of the Chrome project, now defines its build in GYP. By using GYP,
+Node is able to:
+
+- integrate with the optimal build system on all platforms,
+- easily integrate V8's build process into its own, and
+- define its compilation declaratively for better manageability.
+
+GYP was already used in Node v0.6 to build on Windows, but now it
+defines the build on all platforms. Node is still in the process of
+migrating external addon modules to GYP, and node-gyp is included with
+npm. In future releases, node-waf will be officially deprecated. If
+you are currently using a wscript in your addon, please migrate to gyp
+as soon as possible.
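For addon authors migrating off wscript, the gyp equivalent is a `binding.gyp` file in the package root. A minimal sketch (the target and source file names here are hypothetical):

```
{
  "targets": [
    {
      "target_name": "hello",
      "sources": [ "hello.cc" ]
    }
  ]
}
```

With that in place, `node-gyp rebuild` generates the platform-appropriate project files and compiles the addon; npm does this automatically for packages containing a binding.gyp.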
+
+
+## Stabler
+
+The transition from libev and libeio to libuv in 0.6 was somewhat
+destabilizing for many node internals. The gambit paid off: libuv is
+the obvious choice in cross-platform asynchronous IO libraries, and
+Node.js is impressively performant on both Windows and Unix. But it
+made the transition from 0.4 to 0.6 very rocky for a lot of users.
+Libuv wasn't as mature as node, and it showed in those early releases.
+
+At this point, with very few exceptions, if your v0.6 program doesn't
+run on v0.8, it should be easy and obvious to make whatever changes are
+necessary. Libuv has come a very long way, and Node 0.8 is a simpler
+and more efficient machine as a result.
+
+See the [migration
+wiki](https://github.com/joyent/node/wiki/API-changes-between-v0.6-and-v0.8)
+for details on the specific APIs that changed.
+
+## The Return of File Descriptors
+
+In Node 0.4, there was a `listenFD` method that servers could use to
+listen on a specific file descriptor that was already bound to a socket
+or port. In 0.6, that functionality went away, largely because it was
+very Unix-specific, and couldn't be easily made to work with the new
+cross-platform libuv base.
+
+Since the most common use case for listenFD was as a method for having
+servers in multiple node processes share the same underlying handle, the
+`cluster` module was added in its place. However, this still left a lot
+of use cases unaddressed, and was a reason why some people could not use
+node 0.6 for their programs.
+
+In 0.8, we've restored this functionality as `server.listen({ fd:
+number })`.
+
+The other feature in node 0.4 that got dropped in 0.6 was the ability to
+pass arbitrary file descriptors as a child process's stdio, using the
+`customFds` array. In Node 0.6, `customFds` could be used to inherit
+the parent's stdio handles, but not to pass arbitrary handles or file
+descriptors to the child's stdio. Also, there was never a way to pass
+more than the standard `in, out, err` trio, so programs that expected
+FD 4 to be opened in some specific way were out of luck.
+
+In 0.8, we've added the `stdio` array on the `child_process.spawn`
+options. Pass as many file descriptors, handles, etc. as you like, and
+the child process will see them as already-opened FDs.
+
+## More Powerful Cluster
+
+The cluster module in 0.8 is so much improved over 0.6, it's basically a
+complete rewrite. The API is mostly backwards compatible, but not
+entirely. (See the [migration
+wiki](https://github.com/joyent/node/wiki/API-changes-between-v0.6-and-v0.8)
+for details.)
+
+Barring these very minor API changes, if you were using cluster in 0.6,
+then your program will still work, but it'll be faster and better
+behaved now. And if you aren't taking advantage of the new
+features in 0.8's cluster, you're really missing out.
+
+There's too much to even do it justice here. Go read [the API
+docs](http://nodejs.org/api/cluster.html).
+
+## Domains
+
+The original idea for Domains was to provide a way to join multiple
+different IO actions, so that you can have some context when an error occurs.
+
+Since Ryan discussed the feature with node users at NodeConf Summer Camp
+last year, the domains feature has gone through many revisions. The
+problem is fairly well understood, but most attempts to solve it
+resulted in serious performance regressions, or uncovered difficult edge
+cases.
+
+What we ended up with in 0.8 is a very stripped-down version of this
+idea. It's entirely opt-in, with minimal performance impact when it's
+used (and none when it isn't). There are a lot of examples in [the API
+documentation](http://nodejs.org/api/domain.html), so check them out,
+and start handling your crashes smarter.
+
+The domain module is still experimental. We are looking forward to your
+feedback, so please use it and let us know what you think.
+
+## Repl, Readline, TTY
+
+The Repl, Readline, and TTY modules have all had a major facelift. The
+interfaces between these three modules have been cleaned up and
+refactored, removing a lot of common pain points and making them easier
+to use for debugging your programs.
+
+It may seem minor at times, but a good repl dramatically increases the
+quality of the overall experience. My personal favorites are:
+
+1. Typing `fs` or `net` or `path` will automatically load the module.
+2. Typing `npm install ...` will give you a helpful message.
+3. It doesn't do that stupid thing where long lines wrap and then the
+ backspace makes it get all confused and crazy. Instead of that, it
+ does the right thing.
+
+## Looking Forward
+
+Like other even-numbered version families before it, v0.8 will maintain
+API and ABI stability throughout its lifetime.
+
+The v0.6 release family will continue to see releases for critical
+bugfixes and security issues through the end of 2012. However, it will
+not be the main focus of the core team's attention.
+
+The v0.9 releases will start in the next couple weeks. The main focus
+of v0.9 will be:
+
+* The HTTP implementation - It has seen a lot of real-world use now, but
+  the http module is in dire need of a cleanup and refactor. Special
+  attention will be paid to making the interfaces more consistent,
+  improving performance, and increasing correctness in more edge cases.
+* The Streams API - The concept of the Stream API is very core to node.
+ However, it is also (like HTTP) a feature that grew up organically,
+ and is now in need of a cleanup. It is currently too hard to get
+ right, especially regarding error handling.
+* Libuv Streams - The handle interfaces in libuv are going to be
+ refactored for added consistency throughout the codebase and across
+ platforms.
+
+Looking past that, there are a few areas where Node.js still has room
+for improvement in terms of internal consistency, idiomatic JavaScript
+usage, and performance. None of these are fully-fleshed out ideas yet,
+but these are some of the items on our radar:
+
+* We ought to move from Buffers to TypedArrays. Buffers will
+  continue to work, but since TypedArray is a JavaScript native, it
+  makes sense to move towards that as the preferred API.
+* SSL performance leaves much to be desired at the moment. Node's
+ interface with OpenSSL is somewhat naive and leaves a lot of potential
+ optimization on the table.
+* The VM module needs massive improvement. It lacks features required
+  to emulate a web browser JavaScript context, which makes it
+  inadequate for such use cases.
+* The Crypto module still uses some very dated APIs. In 0.8, it can
+ accept Buffers for many things (finally!) but it still does not
+ present a Node-like streaming interface.
+
+At this point, the scope of Node's feature set is pretty much locked
+down. We may move things around internally for these cleanup tasks, but
+as you can see, there are no major new features planned. We've drawn
+our boundaries, and now it's time to continue focusing on improving
+stability and performance of the core, so that more innovation can
+happen in **your** programs.
+
+And now, for those of you who may be wondering what was added since
+v0.7.12, your regularly scheduled release announcement:
+
+## 2012.06.25, Version 0.8.0 (stable)
+
+* V8: upgrade to v3.11.10.10
+
+* npm: Upgrade to 1.1.32
+
+* Deprecate iowatcher (Ben Noordhuis)
+
+* windows: update icon (Bert Belder)
+
+* http: Hush 'MUST NOT have a body' warnings to debug() (isaacs)
+
+* Move blog.nodejs.org content into repository (isaacs)
+
+* Fix #3503: stdin: resume() on pipe(dest) (isaacs)
+
+* crypto: fix error reporting in SetKey() (Fedor Indutny)
+
+* Add --no-deprecation and --trace-deprecation command-line flags (isaacs)
+
+* fs: fix fs.watchFile() (Ben Noordhuis)
+
+* fs: Fix fs.readFile() on pipes (isaacs)
+
+* Rename GYP variable node_use_system_openssl to be consistent (Ryan Dahl)
+
+
+Source Code: http://nodejs.org/dist/v0.8.0/node-v0.8.0.tar.gz
+
+Macintosh Installer (Universal): http://nodejs.org/dist/v0.8.0/node-v0.8.0.pkg
+
+Windows Installer: http://nodejs.org/dist/v0.8.0/node-v0.8.0-x86.msi
+
+Windows x64 Installer: http://nodejs.org/dist/v0.8.0/x64/node-v0.8.0-x64.msi
+
+Windows x64 Files: http://nodejs.org/dist/v0.8.0/x64/
+
+Other release files: http://nodejs.org/dist/v0.8.0/
+
+Website: http://nodejs.org/docs/v0.8.0/
+
+Documentation: http://nodejs.org/docs/v0.8.0/api/
+
+Shasums:
+
+```
+b92208b291ad420025c65661a7df51fc618e21ca license.rtf
+0786bcda79bd651b9981682527a1bbabe0250700 node-v0.8.0-x86.msi
+8f160a742a01fdfc1b1423b3fc742d184f1ab70c node-v0.8.0-x86.wixpdb
+6035d6d59304add21e462cd7eb89491570b4970d node-v0.8.0.pkg
+5171fb46fbfee5ac7129c4b17207a3f35a1f57e8 node-v0.8.0.tar.gz
+742100a4ee4cd4d190031a30d9b22b2b69b6872e node.exe
+52d20d285e9aec53043af0843f5ecc4153210693 node.exp
+6d67a64274d844548cc6099c76181a50feafc233 node.lib
+aa2af08d5ab869e6c8b67f01ed67129c1cad8bce node.pdb
+b92208b291ad420025c65661a7df51fc618e21ca x64/license.rtf
+c4d4164d4f78ea68e0e2a85b96f9b355f3b1df8b x64/node-v0.8.0-x64.msi
+df8bb178ee4cb9562d93fe80bbe59b2acf1b9e6b x64/node-v0.8.0-x64.wixpdb
+fc07b475d943f7681e1904d6d7d666b41874a6fa x64/node.exe
+895002806dfb6d5bb141ef0f43cad3b540a4ff6c x64/node.exp
+686c60d5ae5dad7fcffcdc88049c63b2cd23cffc x64/node.lib
+75549cffab0c11107348a66ab0d94d4897bd6a27 x64/node.pdb
+```
+
+Edited by Tim Oxley to provide percentage differences in the
+benchmarks.
diff --git a/doc/blog/release/node-version-0-6-19-stable.md b/doc/blog/release/node-version-0-6-19-stable.md
new file mode 100644
index 00000000000..3a29556c578
--- /dev/null
+++ b/doc/blog/release/node-version-0-6-19-stable.md
@@ -0,0 +1,61 @@
+version: 0.6.19
+title: Node Version 0.6.19 (stable)
+author: Isaac Schlueter
+date: Wed Jun 06 2012 09:55:37 GMT-0700 (PDT)
+status: publish
+category: release
+slug: node-version-0-6-19-stable
+
+
+2012.06.06, Version 0.6.19 (stable)
+
+* npm: upgrade to 1.1.24
+
+* fs: no end emit after createReadStream.pause() (Andreas Madsen)
+
+* tcp, pipe: don't assert on uv_accept() errors (Ben Noordhuis)
+
+* tls: Allow establishing secure connection on the existing socket (koichik)
+
+* dgram: handle close of dgram socket before DNS lookup completes (Seth Fitzsimmons)
+
+* windows: Support half-duplex pipes (Igor Zinkovsky)
+
+* build: disable omit-frame-pointer on solaris systems (Dave Pacheco)
+
+* debugger: fix --debug-brk (Ben Noordhuis)
+
+* net: fix large file downloads failing (koichik)
+
+* fs: fix ReadStream failure to read from existing fd (Christopher Jeffrey)
+
+* net: destroy socket on DNS error (Stefan Rusu)
+
+* dtrace: add missing translator (Dave Pacheco)
+
+* unix: don't flush tty on switch to raw mode (Ben Noordhuis)
+
+* windows: reset brightness when reverting to default text color (Bert Belder)
+
+* npm: update to 1.1.1
+  - Update which, fstream, mkdirp, request, and rimraf
+  - Fix #2123 Set path properly for lifecycle scripts on windows
+  - Mark the root as seen, so we don't recurse into it. Fixes #1838. (Martin Cooper)
diff --git a/doc/blog/release/version-0-6-12-stable.md b/doc/blog/release/version-0-6-12-stable.md
new file mode 100644
index 00000000000..bd20ef7da9f
--- /dev/null
+++ b/doc/blog/release/version-0-6-12-stable.md
@@ -0,0 +1,66 @@
+version: 0.6.12
+title: Version 0.6.12 (stable)
+author: Isaac Schlueter
+date: Fri Mar 02 2012 13:22:49 GMT-0800 (PST)
+status: publish
+category: release
+slug: version-0-6-12-stable
+
+
+2012.03.02, Version 0.6.12 (stable)
+
+* Upgrade V8 to 3.6.6.24
+
+* dtrace ustack helper improvements (Dave Pacheco)
+
+* API Documentation refactor (isaacs)
+
+* #2827 net: fix race write() before and after connect() (koichik)
+
+* #2554 #2567 throw if fs args for 'start' or 'end' are strings (AJ ONeal)
+
+* punycode: Update to v1.0.0 (Mathias Bynens)
+
+* Make a fat binary for the OS X pkg (isaacs)
+
+* Fix hang on accessing process.stdin (isaacs)
+
+* repl: make tab completion work on non-objects (Nathan Rajlich)
+
+* Fix fs.watch on OS X (Ben Noordhuis)
+
+* Fix #2515 nested setTimeouts cause premature process exit (Ben Noordhuis)
+
+* windows: fix time conversion in stat (Igor Zinkovsky)
+
+* windows: fs: handle EOF in read (Brandon Philips)
+
+* windows: avoid IOCP short-circuit on non-ifs lsps (Igor Zinkovsky)
+
+* Upgrade npm to 1.1.4 (isaacs)
+
+- windows fixes
+- Bundle nested bundleDependencies properly
+- install: support --save with url install targets
+- shrinkwrap: behave properly with url-installed modules
+- support installing uncompressed tars or single file modules from urls etc.
+- don't run make clean on rebuild
+- support HTTPS-over-HTTP proxy tunneling
+
diff --git a/doc/blog/release/version-0-6-13-stable.md b/doc/blog/release/version-0-6-13-stable.md
new file mode 100644
index 00000000000..2561c1df46f
--- /dev/null
+++ b/doc/blog/release/version-0-6-13-stable.md
@@ -0,0 +1,50 @@
+version: 0.6.13
+title: Version 0.6.13 (stable)
+author: Isaac Schlueter
+date: Thu Mar 15 2012 10:37:02 GMT-0700 (PDT)
+status: publish
+category: release
+slug: version-0-6-13-stable
+
+
+2012.03.15, Version 0.6.13 (stable)
+
+* Windows: Many libuv test fixes (Bert Belder)
+
+* Windows: avoid uv_guess_handle crash when fd < 0 (Bert Belder)
+
+* Map EBUSY and ENOTEMPTY errors (Bert Belder)
+
+* Windows: include syscall in fs errors (Bert Belder)
+
+* Fix fs.watch ENOSYS on Linux kernel version mismatch (Ben Noordhuis)
+
+* Update npm to 1.1.9
+
+- upgrade node-gyp to 0.3.5 (Nathan Rajlich)
+- Fix isaacs/npm#2249 Add cache-max and cache-min configs
+- Properly redirect across https/http registry requests
+- log config usage if undefined key in set function (Kris Windham)
+- Add support for os/cpu fields in package.json (Adam Blackburn)
+- Automatically node-gyp packages containing a binding.gyp
+- Fix failures unpacking in UNC shares
+- Never create un-listable directories
+- Handle cases where an optionalDependency fails to build
+
diff --git a/doc/blog/release/version-0-6-14-stable.md b/doc/blog/release/version-0-6-14-stable.md
new file mode 100644
index 00000000000..ba5183414c6
--- /dev/null
+++ b/doc/blog/release/version-0-6-14-stable.md
@@ -0,0 +1,55 @@
+version: 0.6.14
+title: Version 0.6.14 (stable)
+author: Isaac Schlueter
+date: Fri Mar 23 2012 11:22:22 GMT-0700 (PDT)
+status: publish
+category: release
+slug: version-0-6-14-stable
+
+
+2012.03.22, Version 0.6.14 (stable)
+
+* net: don't crash when queued write fails (Igor Zinkovsky)
+
+* sunos: fix EMFILE on process.memoryUsage() (Bryan Cantrill)
+
+* crypto: fix compile-time error with openssl 0.9.7e (Ben Noordhuis)
+
+* unix: ignore ECONNABORTED errors from accept() (Ben Noordhuis)
+
+* Add UV_ENOSPC and mappings to it (Bert Belder)
+
+* http-parser: Fix response body is not read (koichik)
+
+* Upgrade npm to 1.1.12
+
+- upgrade node-gyp to 0.3.7
+- work around AV-locked directories on Windows
+- Fix isaacs/npm#2293 Don't try to 'uninstall' /
+- Exclude symbolic links from packages.
+- Fix isaacs/npm#2275 Spurious 'unresolvable cycle' error.
+- Exclude/include dot files as if they were normal files
+
diff --git a/doc/blog/release/version-0-7-10-unstable.md b/doc/blog/release/version-0-7-10-unstable.md
new file mode 100644
index 00000000000..3e149111478
--- /dev/null
+++ b/doc/blog/release/version-0-7-10-unstable.md
@@ -0,0 +1,86 @@
+version: 0.7.10
+title: Version 0.7.10 (unstable)
+author: Isaac Schlueter
+date: Mon Jun 11 2012 09:00:25 GMT-0700 (PDT)
+status: publish
+category: release
+slug: version-0-7-10-unstable
+
+
+2012.06.11, Version 0.7.10 (unstable)
+
+This is the second-to-last release on the 0.7 branch. Version 0.8.0
+will be released some time next week. As with other even-numbered Node
+releases before it, the v0.8.x releases will maintain API and binary
+compatibility.
+
+Please try out this release. There will be very few changes between
+this and the v0.8.x release family. This is the last chance to comment
+on the API before it is locked down for stability.
+
+* Roll V8 back to 3.9.24.31
+
+* build: x64 target should always pass -m64 (Robert Mustacchi)
+
+* add NODE_EXTERN to node::Start (Joel Brandt)
+
+* repl: Warn about running npm commands (isaacs)
+
+* slab_allocator: fix crash in dtor if V8 is dead (Ben Noordhuis)
+
+* slab_allocator: fix leak of Persistent handles (Shigeki Ohtsu)
+
+* windows/msi: add node.js prompt to startmenu (Jeroen Janssen)
+
+* windows/msi: fix adding node to PATH (Jeroen Janssen)
+
+* windows/msi: add start menu links when installing (Jeroen Janssen)
+
+* windows: don't install x64 version into the 'program files (x86)' folder (Matt Gollob)
+
+* domain: Fix #3379 domain.intercept no longer passes error arg to cb (Marc Harter)
+
+* fs: make callbacks run in global context (Ben Noordhuis)
+
+* fs: enable fs.realpath on windows (isaacs)
+
+* child_process: expose UV_PROCESS_DETACHED as options.detached (Charlie McConnell)
+
+* child_process: new stdio API for .spawn() method (Fedor Indutny)
+
+* child_process: spawn().ref() and spawn().unref() (Fedor Indutny)
+
+This version adds backwards-compatible shims for binary addons that use libeio and libev directly. If you find that binary modules that could compile on v0.6 cannot compile on this version, please let us know. Note that libev is officially deprecated in v0.8, and will be removed in v0.9. You should be porting your modules to use libuv as soon as possible.
+
+V8 is on 3.11.10 currently, and will remain on the V8 3.11.x branch for the duration of Node v0.8.x.
+
+* npm: Upgrade to 1.1.30
+  - Improved 'npm init'
+  - Fix the 'cb never called' error from 'outdated' and 'update'
+  - Add --save-bundle|-B config
+  - Fix isaacs/npm#2465: Make npm script and windows shims cygwin-aware
+  - Fix isaacs/npm#2452 Use --save(-dev|-optional) in npm rm
+  - logstream option to replace removed logfd (Rod Vagg)
+  - Read default descriptions from README.md files
+
+* Shims to support deprecated ev_* and eio_* methods (Ben Noordhuis)
+
+* #3118 net.Socket: Delay pause/resume until after connect (isaacs)
+
+* #3465 Add ./configure --no-ifaddrs flag (isaacs)
+
+* child_process: add .stdin stream to forks (Fedor Indutny)
+
+* build: fix make install DESTDIR=/path (Ben Noordhuis)
+
+* tls: fix off-by-one error in renegotiation check (Ben Noordhuis)
diff --git a/doc/blog/release/version-0-7-6-unstable.md b/doc/blog/release/version-0-7-6-unstable.md
new file mode 100644
index 00000000000..ecf67f3ea06
--- /dev/null
+++ b/doc/blog/release/version-0-7-6-unstable.md
@@ -0,0 +1,72 @@
+version: 0.7.6
+title: Version 0.7.6 (unstable)
+author: Isaac Schlueter
+date: Tue Mar 13 2012 14:12:30 GMT-0700 (PDT)
+status: publish
+category: release
+slug: version-0-7-6-unstable
+
+
+2012.03.13, Version 0.7.6 (unstable)
+
+* Upgrade v8 to 3.9.17
+
+* Upgrade npm to 1.1.8
+
+- Add support for os/cpu fields in package.json (Adam Blackburn)
+- Automatically node-gyp packages containing a binding.gyp
+- Fix failures unpacking in UNC shares
+- Never create un-listable directories
+- Handle cases where an optionalDependency fails to build
+
+
+
+* events: newListener emit correct fn when using 'once' (Roly Fentanes)
+
+* url: Ignore empty port component (Łukasz Walukiewicz)
+
+* module: replace 'children' array (isaacs)
+
+* tls: parse multiple values of a key in ssl certificate (Sambasiva Suda)
+
+* cluster: support passing of named pipes (Ben Noordhuis)
+
+* Windows: include syscall in fs errors (Bert Belder)
+
+* http: #2888 Emit end event only once (Igor Zinkovsky)
diff --git a/doc/blog/video/bryan-cantrill-instrumenting-the-real-time-web.md b/doc/blog/video/bryan-cantrill-instrumenting-the-real-time-web.md
new file mode 100644
index 00000000000..0c26cb7a493
--- /dev/null
+++ b/doc/blog/video/bryan-cantrill-instrumenting-the-real-time-web.md
@@ -0,0 +1,42 @@
+title: Bryan Cantrill: Instrumenting the Real Time Web
+author: Isaac Schlueter
+date: Tue May 08 2012 10:00:34 GMT-0700 (PDT)
+status: publish
+category: video
+slug: bryan-cantrill-instrumenting-the-real-time-web
+
+Bryan Cantrill, VP of Engineering at Joyent, describes the challenges of instrumenting a distributed, dynamic, highly virtualized system -- and what their experiences taught them about the problem, the technologies used to tackle it, and promising approaches.
+
+This talk was given at Velocity Conf in 2011.
+
+
diff --git a/doc/blog/video/welcome-to-the-node-blog.md b/doc/blog/video/welcome-to-the-node-blog.md
new file mode 100644
index 00000000000..3ac39858326
--- /dev/null
+++ b/doc/blog/video/welcome-to-the-node-blog.md
@@ -0,0 +1,13 @@
+title: Welcome to the Node blog
+author: ryandahl
+date: Thu Mar 17 2011 20:17:12 GMT-0700 (PDT)
+status: publish
+category: video
+slug: welcome-to-the-node-blog
+
+Since Livejournal is disintegrating into Russian spam, I'm moving my technical blog to http://blog.nodejs.org/. I hope to do frequent small posts about what's going on in Node development and include posts from other core Node developers. Please subscribe to the RSS feed.
+
+To avoid making this post completely devoid of content, here is a new video from a talk I gave at the SF PHP group, tastefully produced by Marakana:
+
diff --git a/doc/blog/vulnerability/http-server-security-vulnerability-please-upgrade-to-0-6-17.md b/doc/blog/vulnerability/http-server-security-vulnerability-please-upgrade-to-0-6-17.md
new file mode 100644
index 00000000000..22a8c71922d
--- /dev/null
+++ b/doc/blog/vulnerability/http-server-security-vulnerability-please-upgrade-to-0-6-17.md
@@ -0,0 +1,45 @@
+title: HTTP Server Security Vulnerability: Please upgrade to 0.6.17
+author: Isaac Schlueter
+date: Mon May 07 2012 10:02:01 GMT-0700 (PDT)
+status: publish
+category: vulnerability
+slug: http-server-security-vulnerability-please-upgrade-to-0-6-17
+
+
+## tl;dr
+
+
+A carefully crafted attack request can cause the contents of the HTTP parser's buffer to be appended to the attacking request's header, making it appear to come from the attacker. Since it is generally safe to echo back contents of a request, this can allow an attacker to get an otherwise correctly designed server to divulge information about other requests. It is theoretically possible that it could enable header-spoofing attacks, though such an attack has not been demonstrated.
+
+Versions affected: All versions of the 0.5/0.6 branch prior to 0.6.17, and all versions of the 0.7 branch prior to 0.7.8. Versions in the 0.4 branch are not affected.
+
+Fix: Upgrade to v0.6.17, or apply the fix in c9a231d to your system.
+
+
+## Details
+
+
+A few weeks ago, Matthew Daley found a security vulnerability in Node's HTTP implementation, and thankfully did the responsible thing and reported it to us via email. He explained it quite well, so I'll quote him here:
+
+
+There is a vulnerability in node's `http_parser` binding which allows information disclosure to a remote attacker:
+
+In node::StringPtr::Update, an attempt is made at an optimization on certain inputs (`node_http_parser.cc`, line 151). The intent is that if the current string pointer plus the current string size is equal to the incoming string pointer, the current string size is just increased to match, as the incoming string lies just beyond the current string pointer. However, the check to see whether or not this can be done is incorrect; "size" is used whereas "size_" should be used. Therefore, an attacker can call Update with a string of certain length and cause the current string to have other data appended to it. In the case of HTTP being parsed out of incoming socket data, this can be incoming data from other sockets.
+
+Normally node::StringPtr::Save, which is called after each execution of `http_parser`, would stop this from being exploitable as it converts strings to non-optimizable heap-based strings. However, this is not done to 0-length strings. An attacker can therefore exploit the mistake by making Update set a 0-length string, and then Update past its boundary, so long as it is done in one `http_parser` execution. This can be done with an HTTP header with empty value, followed by a continuation with a value of certain length.
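To make the mistake concrete, here is a JavaScript model of that adjacency optimization (illustration only: the real code is C++ in `node_http_parser.cc`, and all names below are invented):

```javascript
// The parser keeps a (start, size_) view into a shared incoming-data buffer.
// update() should extend the view in place only when the new chunk begins
// exactly where the stored view ends: start === view.start + view.size_.
// The bug used the incoming chunk's size in that comparison instead.
const shared = 'HeaderValueSECRET-DATA-FROM-ANOTHER-SOCKETmore';
const view = { start: -1, size_: 0 };

function updateBuggy(view, start, size) {
  if (view.start !== -1 && start === view.start + size) { // BUG: should be view.size_
    view.size_ += size; // absorbs everything between the old view and the new chunk
  } else {
    view.start = start;
    view.size_ = size;
  }
}

updateBuggy(view, 0, 0);    // a header with empty value stores a 0-length string
updateBuggy(view, 22, 22);  // attacker-sized chunk: 22 === 0 + 22 passes the buggy test

const leaked = shared.slice(view.start, view.start + view.size_);
console.log(leaked); // "HeaderValueSECRET-DATA": bytes the view never stored
```

With the correct check (`start === view.start + view.size_`, i.e. 22 === 0), the second call would simply re-point the view at the new chunk instead of leaking the bytes in between.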
+
+
+```
+$ ./node ~/stringptr-update-poc-server.js &
+[1] 11801
+$ ~/stringptr-update-poc-client.py
+HTTP/1.1 200 OK
+Content-Type: text/plain
+Date: Wed, 18 Apr 2012 00:05:11 GMT
+Connection: close
+Transfer-Encoding: chunked
+
+64
+X header:
+ This is private data, perhaps an HTTP request with a Cookie in it.
+0
+```
+
+
+The fix landed in 7b3fb22 and c9a231d, for master and v0.6, respectively. The innocuous commit message does not give away the security implications, precisely because we wanted to get a fix out before making a big deal about it.
+
+The first releases with the fix are v0.7.8 and v0.6.17. So now is a good time to make a big deal about it.
+
+If you are using Node version 0.6 in production, please upgrade to v0.6.17, or at least apply the fix in c9a231d to your system. (Version 0.6.17 also fixes some other important bugs, and is without doubt the most stable release of Node 0.6 to date, so it's a good idea to upgrade anyway.)
+
+I'm extremely grateful that Matthew took the time to report the problem to us with such an elegant explanation, and in such a way that we had a reasonable amount of time to fix the issue before making it public.