website: move website to joyent/node-website

The website will no longer live in the source repository; instead,
it can be found at http://github.com/joyent/node-website
commit 37376debe54ccd2889174ebb8ffc3949e0bda298 1 parent b222374
@tjfontaine authored
Showing with 4 additions and 21,636 deletions.
  1. +4 −31 Makefile
  2. +0 −147 doc/about/index.html
  3. +0 −241 doc/blog.html
  4. +0 −28 doc/blog/README.md
  5. +0 −16 doc/blog/Uncategorized/an-easy-way-to-build-scalable-network-programs.md
  6. +0 −17 doc/blog/Uncategorized/bnoordhuis-departure.md
  7. +0 −25 doc/blog/Uncategorized/development-environment.md
  8. +0 −34 doc/blog/Uncategorized/evolving-the-node-js-brand.md
  9. +0 −12 doc/blog/Uncategorized/growing-up.md
  10. +0 −14 doc/blog/Uncategorized/jobs-nodejs-org.md
  11. +0 −84 doc/blog/Uncategorized/ldapjs-a-reprise-of-ldap.md
  12. +0 −45 doc/blog/Uncategorized/libuv-status-report.md
  13. +0 −11 doc/blog/Uncategorized/node-meetup-this-thursday.md
  14. +0 −12 doc/blog/Uncategorized/node-office-hours-cut-short.md
  15. +0 −12 doc/blog/Uncategorized/office-hours.md
  16. +0 −12 doc/blog/Uncategorized/porting-node-to-windows-with-microsoft%e2%80%99s-help.md
  17. +0 −60 doc/blog/Uncategorized/profiling-node-js.md
  18. +0 −13 doc/blog/Uncategorized/some-new-node-projects.md
  19. +0 −10 doc/blog/Uncategorized/the-videos-from-node-meetup.md
  20. +0 −54 doc/blog/Uncategorized/tj-fontaine-new-node-lead.md
  21. +0 −17 doc/blog/Uncategorized/trademark.md
  22. +0 −23 doc/blog/Uncategorized/version-0-6.md
  23. BIN  doc/blog/favicon.ico
  24. +0 −851 doc/blog/feature/streams2.md
  25. +0 −89 doc/blog/module/multi-server-continuous-deployment-with-fleet.md
  26. +0 −337 doc/blog/module/service-logging-in-json-with-bunyan.md
  27. +0 −52 doc/blog/nodejs-road-ahead.md
  28. +0 −82 doc/blog/npm/2013-outage-postmortem.md
  29. +0 −167 doc/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md
  30. +0 −64 doc/blog/npm/npm-1-0-global-vs-local-installation.md
  31. +0 −114 doc/blog/npm/npm-1-0-link.md
  32. +0 −36 doc/blog/npm/npm-1-0-released.md
  33. +0 −144 doc/blog/npm/npm-1-0-the-new-ls.md
  34. +0 −134 doc/blog/npm/peer-dependencies.md
  35. +0 −43 doc/blog/release/0.6.21.md
  36. +0 −25 doc/blog/release/node-v0-4-10.md
  37. +0 −39 doc/blog/release/node-v0-4-11.md
  38. +0 −29 doc/blog/release/node-v0-4-12.md
  39. +0 −33 doc/blog/release/node-v0-4-3.md
  40. +0 −27 doc/blog/release/node-v0-4-4.md
  41. +0 −29 doc/blog/release/node-v0-4-5.md
  42. +0 −27 doc/blog/release/node-v0-4-6.md
  43. +0 −23 doc/blog/release/node-v0-4-7.md
  44. +0 −55 doc/blog/release/node-v0-4-8.md
  45. +0 −30 doc/blog/release/node-v0-4-9.md
  46. +0 −39 doc/blog/release/node-v0-5-0-unstable.md
  47. +0 −30 doc/blog/release/node-v0-5-1.md
  48. +0 −41 doc/blog/release/node-v0-5-10.md
  49. +0 −27 doc/blog/release/node-v0-5-2.md
  50. +0 −53 doc/blog/release/node-v0-5-3.md
  51. +0 −36 doc/blog/release/node-v0-5-4.md
  52. +0 −40 doc/blog/release/node-v0-5-5.md
  53. +0 −49 doc/blog/release/node-v0-5-6.md
  54. +0 −35 doc/blog/release/node-v0-5-7-unstable.md
  55. +0 −26 doc/blog/release/node-v0-5-8.md
  56. +0 −27 doc/blog/release/node-v0-5-9.md
  57. +0 −80 doc/blog/release/node-v0-6-0.md
  58. +0 −33 doc/blog/release/node-v0-6-1.md
  59. +0 −30 doc/blog/release/node-v0-6-10.md
  60. +0 −27 doc/blog/release/node-v0-6-2.md
  61. +0 −30 doc/blog/release/node-v0-6-3.md
  62. +0 −31 doc/blog/release/node-v0-6-4.md
  63. +0 −21 doc/blog/release/node-v0-6-5.md
  64. +0 −32 doc/blog/release/node-v0-6-6.md
  65. +0 −41 doc/blog/release/node-v0-6-7.md
  66. +0 −31 doc/blog/release/node-v0-6-8.md
  67. +0 −30 doc/blog/release/node-v0-6-9.md
  68. +0 −29 doc/blog/release/node-v0-7-0-unstable.md
  69. +0 −30 doc/blog/release/node-v0-7-1.md
  70. +0 −32 doc/blog/release/node-v0-7-2-unstable.md
  71. +0 −39 doc/blog/release/node-v0-7-3.md
  72. +0 −384 doc/blog/release/node-v0.8.0.md
  73. +0 −61 doc/blog/release/node-version-0-6-19-stable.md
  74. +0 −73 doc/blog/release/node-version-0-7-9-unstable.md
  75. +0 −487 doc/blog/release/v0.10.0.md
  76. +0 −82 doc/blog/release/v0.10.1.md
  77. +0 −63 doc/blog/release/v0.10.10.md
  78. +0 −70 doc/blog/release/v0.10.11.md
  79. +0 −67 doc/blog/release/v0.10.12.md
  80. +0 −76 doc/blog/release/v0.10.13.md
  81. +0 −72 doc/blog/release/v0.10.14.md
  82. +0 −58 doc/blog/release/v0.10.15.md
  83. +0 −74 doc/blog/release/v0.10.16.md
  84. +0 −92 doc/blog/release/v0.10.17.md
  85. +0 −62 doc/blog/release/v0.10.18.md
  86. +0 −70 doc/blog/release/v0.10.19.md
  87. +0 −87 doc/blog/release/v0.10.2.md
  88. +0 −59 doc/blog/release/v0.10.20.md
  89. +0 −71 doc/blog/release/v0.10.21.md
  90. +0 −74 doc/blog/release/v0.10.22.md
  91. +0 −86 doc/blog/release/v0.10.23.md
  92. +0 −68 doc/blog/release/v0.10.24.md
  93. +0 −72 doc/blog/release/v0.10.25.md
  94. +0 −77 doc/blog/release/v0.10.3.md
  95. +0 −85 doc/blog/release/v0.10.4.md
  96. +0 −73 doc/blog/release/v0.10.5.md
  97. +0 −65 doc/blog/release/v0.10.6.md
  98. +0 −65 doc/blog/release/v0.10.7.md
  99. +0 −75 doc/blog/release/v0.10.8.md
  100. +0 −67 doc/blog/release/v0.10.9.md
  101. +0 −86 doc/blog/release/v0.11.0.md
  102. +0 −75 doc/blog/release/v0.11.1.md
  103. +0 −106 doc/blog/release/v0.11.10.md
  104. +0 −106 doc/blog/release/v0.11.11.md
  105. +0 −87 doc/blog/release/v0.11.2.md
  106. +0 −101 doc/blog/release/v0.11.3.md
  107. +0 −88 doc/blog/release/v0.11.4.md
  108. +0 −88 doc/blog/release/v0.11.5.md
  109. +0 −110 doc/blog/release/v0.11.6.md
  110. +0 −92 doc/blog/release/v0.11.7.md
  111. +0 −96 doc/blog/release/v0.11.8.md
  112. +0 −92 doc/blog/release/v0.11.9.md
  113. +0 −51 doc/blog/release/v0.6.20.md
  114. +0 −77 doc/blog/release/v0.8.1.md
  115. +0 −93 doc/blog/release/v0.8.10.md
  116. +0 −63 doc/blog/release/v0.8.11.md
  117. +0 −73 doc/blog/release/v0.8.12.md
  118. +0 −75 doc/blog/release/v0.8.13.md
  119. +0 −80 doc/blog/release/v0.8.14.md
  120. +0 −71 doc/blog/release/v0.8.15.md
  121. +0 −65 doc/blog/release/v0.8.16.md
  122. +0 −80 doc/blog/release/v0.8.17.md
  123. +0 −67 doc/blog/release/v0.8.18.md
  124. +0 −72 doc/blog/release/v0.8.19.md
  125. +0 −73 doc/blog/release/v0.8.2.md
  126. +0 −62 doc/blog/release/v0.8.20.md
  127. +0 −86 doc/blog/release/v0.8.21.md
  128. +0 −62 doc/blog/release/v0.8.22.md
  129. +0 −69 doc/blog/release/v0.8.23.md
  130. +0 −63 doc/blog/release/v0.8.24.md
  131. +0 −59 doc/blog/release/v0.8.25.md
  132. +0 −71 doc/blog/release/v0.8.26.md
  133. +0 −71 doc/blog/release/v0.8.3.md
  134. +0 −59 doc/blog/release/v0.8.4.md
  135. +0 −77 doc/blog/release/v0.8.5.md
  136. +0 −85 doc/blog/release/v0.8.6.md
  137. +0 −75 doc/blog/release/v0.8.7.md
  138. +0 −77 doc/blog/release/v0.8.8.md
  139. +0 −97 doc/blog/release/v0.8.9.md
  140. +0 −61 doc/blog/release/v0.9.0.md
  141. +0 −116 doc/blog/release/v0.9.1.md
  142. +0 −84 doc/blog/release/v0.9.10.md
  143. +0 −88 doc/blog/release/v0.9.11.md
  144. +0 −107 doc/blog/release/v0.9.12.md
  145. +0 −95 doc/blog/release/v0.9.2.md
  146. +0 −87 doc/blog/release/v0.9.3.md
  147. +0 −97 doc/blog/release/v0.9.4.md
  148. +0 −82 doc/blog/release/v0.9.5.md
  149. +0 −87 doc/blog/release/v0.9.6.md
  150. +0 −81 doc/blog/release/v0.9.7.md
  151. +0 −76 doc/blog/release/v0.9.8.md
  152. +0 −64 doc/blog/release/version-0-6-11-stable.md
  153. +0 −66 doc/blog/release/version-0-6-12-stable.md
  154. +0 −50 doc/blog/release/version-0-6-13-stable.md
  155. +0 −55 doc/blog/release/version-0-6-14-stable.md
  156. +0 −53 doc/blog/release/version-0-6-15-stable.md
  157. +0 −59 doc/blog/release/version-0-6-16-stable.md
  158. +0 −47 doc/blog/release/version-0-6-17-stable.md
  159. +0 −59 doc/blog/release/version-0-6-18-stable.md
  160. +0 −86 doc/blog/release/version-0-7-10-unstable.md
  161. +0 −80 doc/blog/release/version-0-7-11-unstable.md
  162. +0 −56 doc/blog/release/version-0-7-12.md
  163. +0 −50 doc/blog/release/version-0-7-4-unstable.md
  164. +0 −62 doc/blog/release/version-0-7-5-unstable.md
  165. +0 −72 doc/blog/release/version-0-7-6-unstable.md
  166. +0 −71 doc/blog/release/version-0-7-7-unstable.md
  167. +0 −71 doc/blog/release/version-0-7-8-unstable.md
  168. +0 −78 doc/blog/v0.9.9.md
  169. +0 −13 doc/blog/video/bert-belder-libuv-lxjs-2012.md
  170. +0 −42 doc/blog/video/bryan-cantrill-instrumenting-the-real-time-web.md
  171. +0 −13 doc/blog/video/welcome-to-the-node-blog.md
  172. +0 −37 doc/blog/vulnerability/http-server-pipeline-flood-dos.md
  173. +0 −45 doc/blog/vulnerability/http-server-security-vulnerability-please-upgrade-to-0-6-17.md
  174. +0 −193 doc/cla.html
  175. +0 −263 doc/community/index.html
  176. +0 −190 doc/download/index.html
  177. BIN  doc/favicon.ico
  178. BIN  doc/full-white-stripe.jpg
  179. BIN  doc/images/anchor.png
  180. BIN  doc/images/close-downloads.png
  181. BIN  doc/images/community-icons.png
  182. BIN  doc/images/download-logo.png
  183. BIN  doc/images/ebay-logo.png
  184. BIN  doc/images/footer-logo-alt.png
  185. BIN  doc/images/footer-logo.png
  186. BIN  doc/images/forkme.png
  187. BIN  doc/images/home-icons.png
  188. BIN  doc/images/icons-interior.png
  189. BIN  doc/images/icons.png
  190. BIN  doc/images/joyent-logo_orange_nodeorg-01.png
  191. BIN  doc/images/linkedin-logo.png
  192. BIN  doc/images/logo-light.png
  193. BIN  doc/images/logo.png
  194. BIN  doc/images/logos/monitor.png
  195. BIN  doc/images/logos/node-favicon.png
  196. BIN  doc/images/logos/nodejs-1024x768.png
  197. BIN  doc/images/logos/nodejs-1280x1024.png
  198. BIN  doc/images/logos/nodejs-1440x900.png
  199. BIN  doc/images/logos/nodejs-1920x1200.png
  200. BIN  doc/images/logos/nodejs-2560x1440.png
  201. BIN  doc/images/logos/nodejs-black.eps
  202. BIN  doc/images/logos/nodejs-black.png
  203. BIN  doc/images/logos/nodejs-dark.eps
  204. BIN  doc/images/logos/nodejs-dark.png
  205. BIN  doc/images/logos/nodejs-green.eps
  206. BIN  doc/images/logos/nodejs-green.png
  207. BIN  doc/images/logos/nodejs-light.eps
  208. BIN  doc/images/logos/nodejs.png
  209. BIN  doc/images/microsoft-logo.png
  210. BIN  doc/images/not-invented-here.png
  211. BIN  doc/images/platform-icon-generic.png
  212. BIN  doc/images/platform-icon-osx.png
  213. BIN  doc/images/platform-icon-win.png
  214. BIN  doc/images/platform-icons.png
  215. BIN  doc/images/ryan-speaker.jpg
  216. BIN  doc/images/sponsored.png
  217. BIN  doc/images/twitter-bird.png
  218. BIN  doc/images/yahoo-logo.png
  219. +0 −199 doc/index.html
  220. +0 −93 doc/logos/index.html
  221. BIN  doc/mac_osx_nodejs_installer_logo.png
  222. +0 −448 doc/node.1
  223. +0 −668 doc/pipe.css
  224. +0 −6 doc/robots.txt
  225. +0 −45 doc/rss.xml
  226. BIN  doc/thin-white-stripe.jpg
  227. BIN  doc/trademark-policy.pdf
  228. +0 −98 doc/v0.4_announcement.html
  229. +0 −5 tools/blog/README.md
  230. +0 −270 tools/blog/generate.js
  231. 0  tools/blog/node_modules/ejs/.gitmodules
  232. +0 −4 tools/blog/node_modules/ejs/.npmignore
  233. +0 −98 tools/blog/node_modules/ejs/History.md
  234. +0 −23 tools/blog/node_modules/ejs/Makefile
  235. +0 −151 tools/blog/node_modules/ejs/Readme.md
  236. +0 −14 tools/blog/node_modules/ejs/benchmark.js
  237. +0 −567 tools/blog/node_modules/ejs/ejs.js
  238. +0 −2  tools/blog/node_modules/ejs/ejs.min.js
  239. +0 −24 tools/blog/node_modules/ejs/examples/client.html
  240. +0 −7 tools/blog/node_modules/ejs/examples/list.ejs
  241. +0 −14 tools/blog/node_modules/ejs/examples/list.js
  242. +0 −2  tools/blog/node_modules/ejs/index.js
  243. +0 −298 tools/blog/node_modules/ejs/lib/ejs.js
  244. +0 −198 tools/blog/node_modules/ejs/lib/filters.js
  245. +0 −23 tools/blog/node_modules/ejs/lib/utils.js
  246. +0 −23 tools/blog/node_modules/ejs/package.json
  247. +0 −174 tools/blog/node_modules/ejs/support/compile.js
  248. +0 −329 tools/blog/node_modules/ejs/test/ejs.test.js
  249. +0 −1  tools/blog/node_modules/ejs/test/fixtures/user.ejs
  250. +0 −2  tools/blog/node_modules/glob/.npmignore
  251. +0 −4 tools/blog/node_modules/glob/.travis.yml
  252. +0 −25 tools/blog/node_modules/glob/LICENCE
  253. +0 −27 tools/blog/node_modules/glob/LICENSE
  254. +0 −233 tools/blog/node_modules/glob/README.md
  255. +0 −9 tools/blog/node_modules/glob/examples/g.js
  256. +0 −9 tools/blog/node_modules/glob/examples/usr-local.js
  257. +0 −601 tools/blog/node_modules/glob/glob.js
  258. +0 −1  tools/blog/node_modules/glob/node_modules/graceful-fs/.npmignore
  259. +0 −23 tools/blog/node_modules/glob/node_modules/graceful-fs/LICENSE
  260. +0 −5 tools/blog/node_modules/glob/node_modules/graceful-fs/README.md
  261. +0 −275 tools/blog/node_modules/glob/node_modules/graceful-fs/graceful-fs.js
  262. +0 −22 tools/blog/node_modules/glob/node_modules/graceful-fs/package.json
  263. +0 −41 tools/blog/node_modules/glob/node_modules/graceful-fs/test/open.js
  264. +0 −51 tools/blog/node_modules/glob/node_modules/inherits/README.md
  265. +0 −29 tools/blog/node_modules/glob/node_modules/inherits/inherits.js
  266. +0 −25 tools/blog/node_modules/glob/node_modules/inherits/package.json
  267. +0 −4 tools/blog/node_modules/glob/node_modules/minimatch/.travis.yml
  268. +0 −23 tools/blog/node_modules/glob/node_modules/minimatch/LICENSE
  269. +0 −218 tools/blog/node_modules/glob/node_modules/minimatch/README.md
  270. +0 −1,052 tools/blog/node_modules/glob/node_modules/minimatch/minimatch.js
  271. +0 −1  tools/blog/node_modules/glob/node_modules/minimatch/node_modules/lru-cache/.npmignore
  272. +0 −5 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/lru-cache/AUTHORS
  273. +0 −23 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/lru-cache/LICENSE
  274. +0 −26 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/lru-cache/README.md
  275. +0 −156 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/lru-cache/lib/lru-cache.js
  276. +0 −45 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/lru-cache/package.json
  277. +0 −171 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/lru-cache/test/basic.js
  278. +0 −27 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/sigmund/LICENSE
  279. +0 −53 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/sigmund/README.md
  280. +0 −283 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/sigmund/bench.js
  281. +0 −38 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/sigmund/package.json
  282. +0 −39 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/sigmund/sigmund.js
  283. +0 −24 tools/blog/node_modules/glob/node_modules/minimatch/node_modules/sigmund/test/basic.js
  284. +0 −36 tools/blog/node_modules/glob/node_modules/minimatch/package.json
  285. +0 −273 tools/blog/node_modules/glob/node_modules/minimatch/test/basic.js
  286. +0 −33 tools/blog/node_modules/glob/node_modules/minimatch/test/brace-expand.js
  287. +0 −14 tools/blog/node_modules/glob/node_modules/minimatch/test/caching.js
  288. +0 −274 tools/blog/node_modules/glob/node_modules/minimatch/test/defaults.js
  289. +0 −35 tools/blog/node_modules/glob/package.json
  290. +0 −61 tools/blog/node_modules/glob/test/00-setup.js
  291. +0 −119 tools/blog/node_modules/glob/test/bash-comparison.js
  292. +0 −55 tools/blog/node_modules/glob/test/cwd-test.js
  293. +0 −63 tools/blog/node_modules/glob/test/mark.js
  294. +0 −98 tools/blog/node_modules/glob/test/pause-resume.js
  295. +0 −39 tools/blog/node_modules/glob/test/root-nomount.js
  296. +0 −43 tools/blog/node_modules/glob/test/root.js
  297. +0 −11 tools/blog/node_modules/glob/test/zz-cleanup.js
  298. +0 −2  tools/blog/node_modules/marked/.npmignore
  299. +0 −19 tools/blog/node_modules/marked/LICENSE
  300. +0 −9 tools/blog/node_modules/marked/Makefile
Sorry, we could not display the entire diff because too many files (333) changed.
35 Makefile
@@ -129,36 +129,15 @@ apidoc_sources = $(wildcard doc/api/*.markdown)
apidocs = $(addprefix out/,$(apidoc_sources:.markdown=.html)) \
$(addprefix out/,$(apidoc_sources:.markdown=.json))
-apidoc_dirs = out/doc out/doc/api/ out/doc/api/assets out/doc/about out/doc/community out/doc/download out/doc/logos out/doc/images
+apidoc_dirs = out/doc out/doc/api/ out/doc/api/assets
apiassets = $(subst api_assets,api/assets,$(addprefix out/,$(wildcard doc/api_assets/*)))
-doc_images = $(addprefix out/,$(wildcard doc/images/* doc/*.jpg doc/*.png))
-
website_files = \
- out/doc/index.html \
- out/doc/v0.4_announcement.html \
- out/doc/cla.html \
out/doc/sh_main.js \
- out/doc/sh_javascript.min.js \
- out/doc/sh_vim-dark.css \
- out/doc/sh.css \
- out/doc/favicon.ico \
- out/doc/pipe.css \
- out/doc/about/index.html \
- out/doc/community/index.html \
- out/doc/download/index.html \
- out/doc/logos/index.html \
- out/doc/changelog.html \
- $(doc_images)
-
-doc: $(apidoc_dirs) $(website_files) $(apiassets) $(apidocs) tools/doc/ blog node
-
-blogclean:
- rm -rf out/blog
-
-blog: doc/blog out/Release/node tools/blog
- out/Release/node tools/blog/generate.js doc/blog/ out/blog/ doc/blog.html doc/rss.xml
+ out/doc/sh_javascript.min.js
+
+doc: $(apidoc_dirs) $(website_files) $(apiassets) $(apidocs) tools/doc/ node
$(apidoc_dirs):
mkdir -p $@
@@ -169,9 +148,6 @@ out/doc/api/assets/%: doc/api_assets/% out/doc/api/assets/
out/doc/changelog.html: ChangeLog doc/changelog-head.html doc/changelog-foot.html tools/build-changelog.sh node
bash tools/build-changelog.sh
-out/doc/%.html: doc/%.html node
- cat $< | sed -e 's|__VERSION__|'$(VERSION)'|g' > $@
-
out/doc/%: doc/%
cp -r $< $@
@@ -188,9 +164,6 @@ email.md: ChangeLog tools/email-footer.md
blog.html: email.md
cat $< | ./node tools/doc/node_modules/.bin/marked > $@
-blog-upload: blog
- rsync -r out/blog/ node@nodejs.org:~/web/nodejs.org/blog/
-
website-upload: doc
rsync -r out/doc/ node@nodejs.org:~/web/nodejs.org/
ssh node@nodejs.org '\
147 doc/about/index.html
@@ -1,147 +0,0 @@
-<!doctype html>
-<html lang="en">
- <head>
- <meta charset="utf-8">
- <link type="image/x-icon" rel="icon" href="../favicon.ico">
- <link type="image/x-icon" rel="shortcut icon" href="../favicon.ico">
- <link rel="stylesheet" href="../pipe.css">
- <link rel="stylesheet" href="../sh.css">
- <link rel="alternate"
- type="application/rss+xml"
- title="node blog"
- href="http://feeds.feedburner.com/nodejs/123123123">
- <title>node.js</title>
- </head>
- <body class="alt int" id="about">
- <div id="intro" class="interior">
- <a href="/" title="Go back to the home page">
- <img id="logo" src="http://nodejs.org/images/logo-light.png" alt="node.js">
- </a>
- </div>
- <div id="content" class="clearfix">
- <div id="column2" class="interior">
- <ul>
- <li><a href="/" class="home">Home</a></li>
- <li><a href="/download/" class="download">Download</a></li>
- <li><a href="/about/" class="about current">About</a></li>
- <li><a href="http://npmjs.org/" class="npm">npm Registry</a></li>
- <li><a href="http://nodejs.org/api/" class="docs">Docs</a></li>
- <li><a href="http://blog.nodejs.org" class="blog">Blog</a></li>
- <li><a href="/community/" class="community">Community</a></li>
- <li><a href="/logos/" class="logos">Logos</a></li>
- <li><a href="http://jobs.nodejs.org/" class="jobs">Jobs</a></li>
- </ul>
- <p class="twitter"><a href="http://twitter.com/nodejs">@nodejs</a></p>
- </div>
-
- <div id="column1" class="interior">
- <h1>Node's goal is to provide an easy way to build scalable
- network programs</h1>
-
-
- <p>In the "hello world" web server example
- below, many client connections can be handled concurrently.
- Node tells the operating system (through <code>epoll</code>,
- <code>kqueue</code>, <code>/dev/poll</code>, or
- <code>select</code>) that it should be notified when a new
- connection is made, and then it goes to sleep. If someone new
- connects, then it executes the callback. Each connection is
- only a small heap allocation.</p>
-
- <pre>
-var http = require('http');
-http.createServer(function (req, res) {
- res.writeHead(200, {'Content-Type': 'text/plain'});
- res.end('Hello World\n');
-}).listen(1337, "127.0.0.1");
-console.log('Server running at http://127.0.0.1:1337/');</pre>
- <p>This is in contrast to today's more common concurrency
- model where OS threads are employed. Thread-based networking
- is relatively inefficient and very difficult to use. See: <a
- href="http://www.kegel.com/c10k.html">this</a> and <a
- href="http://bulk.fefe.de/scalable-networking.pdf">this</a>.
- Node will show much better memory efficiency under high-loads
- than systems which allocate 2mb thread stacks for each
- connection. Furthermore, users of Node are free from worries
- of dead-locking the process—there are no locks. Almost no
- function in Node directly performs I/O, so the process never
- blocks. Because nothing blocks, less-than-expert programmers
- are able to develop fast systems.</p>
-
- <p>Node is similar in design to and influenced by systems like
- Ruby's <a href="http://rubyeventmachine.com/">Event
- Machine</a> or Python's <a
- href="http://twistedmatrix.com/">Twisted</a>. Node takes the
- event model a bit further—it presents the event loop as a
- language construct instead of as a library. In other systems
- there is always a blocking call to start the event-loop.
- Typically one defines behavior through callbacks at the
- beginning of a script and at the end starts a server through a
- blocking call like <code>EventMachine::run()</code>. In Node
- there is no such start-the-event-loop call. Node simply enters
- the event loop after executing the input script. Node exits
- the event loop when there are no more callbacks to perform.
- This behavior is like browser javascript—the event loop is
- hidden from the user.</p>
-
- <p>HTTP is a first class protocol in Node. Node's HTTP library
- has grown out of the author's experiences developing and
- working with web servers. For example, streaming data through
- most web frameworks is impossible. Node attempts to correct
- these problems in its HTTP <a
- href="https://github.com/joyent/http-parser/tree/master">parser</a>
- and API. Coupled with Node's purely evented infrastructure, it
- makes a good foundation for web libraries or frameworks.</p>
-
- <p>But what about multiple-processor concurrency? Aren't
- threads necessary to scale programs to multi-core computers?
- You can start new processes via <code><a
- href="http://nodejs.org/api/child_process.html#child_process.fork">child_process.fork()</a></code>
- these other processes will be scheduled in parallel. For load
- balancing incoming connections across multiple processes use
- <a href="http://nodejs.org/api/cluster.html">the
- cluster module</a>.</p>
-
- <p>See also:</p>
- <ul>
- <li><a href="http://s3.amazonaws.com/four.livejournal/20091117/jsconf.pdf">Slides from JSConf 2009</a></li>
- <li><a href="http://nodejs.org/jsconf2010.pdf">Slides from JSConf 2010</a></li>
- <li><a href="http://www.yuiblog.com/blog/2010/05/20/video-dahl/">Video from a talk at Yahoo in May 2010</a></li>
- </ul>
- </div>
-</div>
- <div id="footer">
- <a href="http://joyent.com" class="joyent-logo">Joyent</a>
- <ul class="clearfix">
- <li><a href="/">Node.js</a></li>
- <li><a href="/#download">Download</a></li>
- <li><a href="/about/">About</a></li>
- <li><a href="http://npmjs.org/">npm Registry</a></li>
- <li><a href="http://nodejs.org/api/">Docs</a></li>
- <li><a href="http://blog.nodejs.org">Blog</a></li>
- <li><a href="/community/">Community</a></li>
- <li><a href="/logos/">Logos</a></li>
- <li><a href="http://jobs.nodejs.org/">Jobs</a></li>
- <li><a href="http://twitter.com/nodejs" class="twitter">@nodejs</a></li>
- </ul>
-
- <p>Copyright <a href="http://joyent.com/">Joyent, Inc</a>, Node.js is a <a href="/trademark-policy.pdf">trademark</a> of Joyent, Inc. View <a href="https://raw.github.com/joyent/node/__VERSION__/LICENSE">license</a>.</p>
- </div>
-
-
- <script src="../sh_main.js"></script>
- <script src="../sh_javascript.min.js"></script>
- <script>highlight(undefined, undefined, 'pre');</script>
-
- <script>
- window._gaq = [['_setAccount', 'UA-10874194-2'], ['_trackPageview']];
- (function(d, t) {
- var g = d.createElement(t),
- s = d.getElementsByTagName(t)[0];
- g.src = '//www.google-analytics.com/ga.js';
- s.parentNode.insertBefore(g, s);
- }(document, 'script'));
- </script>
-
- </body>
-</html>
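The removed about page above points to child_process.fork() and the cluster module for making use of multiple cores. A minimal sketch of that pattern, assuming the core cluster, http, and os modules (the port number and "hello world" response simply mirror the example on the page above):

```js
// Illustrative only: spread the "hello world" server across CPU cores
// with the core cluster module mentioned in the about page.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU; incoming connections are balanced across them.
  for (var i = 0; i < numCPUs; i++) cluster.fork();
} else {
  http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
  }).listen(1337);
}
```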
241 doc/blog.html
@@ -1,241 +0,0 @@
-<!DOCTYPE html>
-
-<html lang="en">
-<head>
- <meta charset="utf-8">
- <link rel="stylesheet" href="http://nodejs.org/pipe.css">
- <link rel="stylesheet" href="http://nodejs.org/sh_vim-dark.css">
- <style>
- #column1 h1 {
- clear:both;
- }
- #column1 {
- font-size: 14px;
- }
- #column1 li, #content h1 + p {
- color:inherit;
- font-family: inherit;
- font-size: 14px;
- line-height:24px;
- }
- #column1 li p + ul {
- margin-top:-1em;
- }
- #column1 ul li ul {
- font-size:12px;
- line-height:24px;
- padding-left: 0;
- }
- #column1 ul li ul li {
- list-style:none;
- margin-left:0;
- }
- #column1 ul li ul li:before {
- content: "- ";
- }
- #content #column1 p.meta, #content #column1 p.respond {
- font-size: 14px;
- line-height: 24px;
- color:#690;
- font-family: inherit;
- }
- #content #column1 p.respond {
- font-style:italic;
- padding:2em 0;
- }
- #column1 a {
- color: #8c0;
- }
- #column2 ul {
- padding:0;
- }
- #column2 {
- margin-top:-66px!important;
- }
- div.post-in-feed {
- padding-bottom:1em;
- }
-
- p.next { float:right; width: 40%; text-align:right; }
- p.prev { float:left; width:40%; text-align:left; }
-
- pre { overflow: auto; }
- </style>
-
- <title><%= title || "Node.js Blog" %></title>
- <link rel="alternate" type="application/rss+xml"
- title="Node.js Blog RSS"
- href="http://blog.nodejs.org/feed<%=
- (typeof posts !== 'undefined') ? uri : '/'
- %>">
-</head>
-
-<body class="int blog" id="<%= pageid %>">
- <div id="intro" class="interior">
- <a href="/" title="Go back to the home page"><img id="logo" src=
- "http://nodejs.org/logo.png" alt="node.js"></a>
- </div>
-
- <div id="content" class="clearfix">
- <div id="column2" class="interior">
- <ul>
- <li><a href="http://nodejs.org/" class="home">Home</a></li>
-
- <li><a href="http://nodejs.org/download/" class=
- "download">Download</a></li>
-
- <li><a href="http://nodejs.org/about/" class="about">About</a></li>
-
- <li><a href="http://npmjs.org/" class="npm">npm
- Registry</a></li>
-
- <li><a href="http://nodejs.org/api/" class="docs">Docs</a></li>
-
- <li><a href="http://blog.nodejs.org/" class="blog current">Blog</a></li>
-
- <li><a href="http://nodejs.org/community/" class=
- "community">Community</a></li>
-
- <li><a href="http://nodejs.org/logos/" class=
- "logos">Logos</a></li>
-
- <li><a href="http://jobs.nodejs.org/" class="jobs">Jobs</a></li>
- </ul>
-
- <p class="twitter"><a href="http://twitter.com/nodejs">@nodejs</a></p>
- </div>
-
- <div id="column1" class="interior">
- <h1><%- title %></h1>
- <% if (typeof post !== 'undefined') {
- // just one post on this page
- %>
- <p class="meta"><%-
- post.date.toUTCString().replace(/ GMT$/, '') + ' UTC' +
- (post.author ? ' - ' + post.author : '') +
- (post.category ? ' - <a href="/' + post.category + '/">' +
- post.category + '</a>' : '')
- %></p>
-
- <%- post.content %>
-
- <p class="respond">Please post feedback and comments on
- <a href="https://groups.google.com/group/nodejs">the Node.JS
- user mailing list</a>.<br>
- Please post bugs and feature requests on
- <a href="https://github.com/joyent/node/issues">the Node.JS
- github repository</a>.</p>
-
- <%
- if (post.next || post.prev) {
- if (post.prev) {
- %><p class="prev"><a href="<%=
- post.prev.permalink
- %>">&larr; <%=
- post.prev.title
- %></a></p>
- <%
- }
- if (post.next) {
- %><p class="next"><a href="<%=
- post.next.permalink
- %>"><%=
- post.next.title
- %> &rarr;</a></p>
- <%
- }
- }
- } else { // not single post page
- if (paginated && total > 1 ) {
- if (page > 0) {
- // add 1 to all of the displayed numbers, because
- // humans are not zero-indexed like they ought to be.
- %>
- <p class="prev"><a href="<%= uri + (page - 1) %>">
- &larr; Page <%- page %>
- </a></p>
- <%
- }
- if (page < total - 1) { %>
- <p class="next"><a href="<%= uri + (page + 1) %>">
- Page <%- page + 2 %> &rarr;
- </a></p>
- <%
- }
- }
-
- posts.forEach(function(post) {
- %>
- <div class="post-in-feed">
- <h1><a href="<%=
- post.permalink
- %>" class="permalink"><%-
- post.title
- %></a></h1>
-
- <p class="meta"><%-
- post.date.toUTCString().replace(/ GMT$/, '') + ' UTC' +
- (post.author ? ' - ' + post.author : '') +
- (post.category ? ' - <a href="/' + post.category + '/">' +
- post.category + '</a>' : '')
- %></p>
-
- <%- post.content %>
- </div>
- <%
- });
-
- if (paginated && total > 1 ) {
- if (page > 0) {
- // add 1 to all of the displayed numbers, because
- // humans are not zero-indexed like they ought to be.
- %>
- <p class="prev"><a href="<%= uri + (page - 1) %>">
- &larr; Page <%- page %>
- </a></p>
- <%
- }
- if (page < total - 1) { %>
- <p class="next"><a href="<%= uri + (page + 1) %>">
- Page <%- page + 2 %> &rarr;
- </a></p>
- <%
- }
- } // pagination
- } // not a single post
- %>
- </div>
- </div>
-
- <div id="footer">
- <a href="http://joyent.com" class="joyent-logo">Joyent</a>
- <ul class="clearfix">
- <li><a href="http://nodejs.org/">Node.js</a></li>
- <li><a href="http://nodejs.org/download/">Download</a></li>
- <li><a href="http://nodejs.org/about/">About</a></li>
- <li><a href="http://npmjs.org/">npm Registry</a></li>
- <li><a href="http://nodejs.org/api/">Docs</a></li>
- <li><a href="http://blog.nodejs.org">Blog</a></li>
- <li><a href="http://nodejs.org/community/">Community</a></li>
- <li><a href="https://nodejs.org/logos/">Logos</a></li>
- <li><a href="http://jobs.nodejs.org/">Jobs</a></li>
- <li><a href="http://twitter.com/nodejs" class="twitter">@nodejs</a></li>
- </ul>
-
- <p>Copyright <a href="http://joyent.com/">Joyent, Inc</a>, Node.js is a <a href="/trademark-policy.pdf">trademark</a> of Joyent, Inc. View <a href="https://raw.github.com/joyent/node/master/LICENSE">license</a>.</p>
- </div>
-
- <script src="../sh_main.js"></script>
- <script src="../sh_javascript.min.js"></script>
- <script>highlight(undefined, undefined, 'pre');</script>
- <script>
- window._gaq = [['_setAccount', 'UA-10874194-2'], ['_trackPageview']];
- (function(d, t) {
- var g = d.createElement(t),
- s = d.getElementsByTagName(t)[0];
- g.src = '//www.google-analytics.com/ga.js';
- s.parentNode.insertBefore(g, s);
- }(document, 'script'));
- </script>
-</body>
-</html>
28 doc/blog/README.md
@@ -1,28 +0,0 @@
-title: README.md
-status: private
-
-# How This Blog Works
-
-Each `.md` file in this folder structure is a blog post. It has a
-few headers and a markdown body. (HTML is allowed in the body as well.)
-
-The relevant headers are:
-
-1. title
-2. author
-3. status: Only posts with a status of "publish" are published.
-4. category: The "release" category is treated a bit specially.
-5. version: Only relevant for "release" category.
-6. date
-7. slug: The bit that goes on the url. Must be unique, will be
- generated from the title and date if missing.
-
-Posts in the "release" category are only shown in the main lists when
-they are the most recent release for that version family. The stable
-branch supersedes its unstable counterpart, so the presence of a `0.8.2`
-release notice will cause `0.7.10` to be hidden, but `0.6.19` would
-be unaffected.
-
-The folder structure in the blog source does not matter. Organize files
-here however makes sense. The metadata will be sorted out in the build
-later.
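For illustration, a post file following the headers listed in the README above might look like this (the values here are hypothetical; real posts from this tree appear in the diffs below):

```
title: An Example Post
author: janedoe
status: publish
category: Uncategorized
date: Mon Jan 06 2014 10:00:00 GMT-0800 (PST)
slug: an-example-post

The markdown (or HTML) body of the post goes here.
```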
16 doc/blog/Uncategorized/an-easy-way-to-build-scalable-network-programs.md
@@ -1,16 +0,0 @@
-title: An Easy Way to Build Scalable Network Programs
-author: ryandahl
-date: Tue Oct 04 2011 15:39:56 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: an-easy-way-to-build-scalable-network-programs
-
-Suppose you're writing a web server which does video encoding on each file upload. Video encoding is very much compute bound. Some recent blog posts suggest that Node.js would fail miserably at this.
-
-Using Node does not mean that you have to write a video encoding algorithm in JavaScript (a language without even 64 bit integers) and crunch away in the main server event loop. The suggested approach is to separate the I/O bound task of receiving uploads and serving downloads from the compute bound task of video encoding. In the case of video encoding this is accomplished by forking out to ffmpeg. Node provides advanced means of asynchronously controlling subprocesses for work like this.
-
-It has also been suggested that Node does not take advantage of multicore machines. Node has long supported load-balancing connections over multiple processes in just a few lines of code - in this way a Node server will use the available cores. In coming releases we'll make it even easier: just pass <code>--balance</code> on the command line and Node will manage the cluster of processes.
-
-Node has a clear purpose: provide an easy way to build scalable network programs. It is not a tool for every problem. Do not write a ray tracer with Node. Do not write a web browser with Node. Do however reach for Node if tasked with writing a DNS server, DHCP server, or even a video encoding server.
-
-By relying on the kernel to schedule and preempt computationally expensive tasks and to load balance incoming connections, Node appears less magical than server platforms that employ userland scheduling. So far, our focus on simplicity and transparency has paid off: <a href="http://www.joyent.com/blog/node-js-meetup-distributed-web-architectures/">the</a> <a href="http://venturebeat.com/2011/08/16/linkedin-node/">number</a> <a href="http://corp.klout.com/blog/2011/10/the-tech-behind-klout-com/">of</a> <a href="http://www.joelonsoftware.com/items/2011/09/13.html">success</a> <a href="http://pow.cx/">stories</a> from developers and corporations who are adopting the technology continues to grow.
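The post above suggests handing compute-bound encoding off to a subprocess such as ffmpeg. A minimal sketch of that hand-off with the core child_process module (the file names and ffmpeg flags are made up for illustration):

```js
// Illustrative only: offload encoding to an ffmpeg subprocess so the
// main event loop stays free to serve uploads and downloads.
var spawn = require('child_process').spawn;

var ffmpeg = spawn('ffmpeg', ['-i', 'upload.mov', 'output.webm']);

ffmpeg.stderr.on('data', function (chunk) {
  // ffmpeg writes progress information to stderr; log or parse it as needed.
  console.log(chunk.toString());
});

ffmpeg.on('exit', function (code) {
  console.log('encoding finished with exit code ' + code);
});
```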
17 doc/blog/Uncategorized/bnoordhuis-departure.md
@@ -1,17 +0,0 @@
-title: Ben Noordhuis's Departure
-date: Tue Dec 3 14:13:57 PST 2013
-slug: bnoordhuis-departure
-
-As of this past weekend, Ben Noordhuis has decided to step away from
-Node.js and libuv, and is no longer acting as a core committer.
-
-Ben has done a tremendous amount of great work in the past. We're sad
-to lose the benefit of his continued hard work and expertise, and
-extremely grateful for what he has added to Node.js and libuv over the
-years.
-
-Many of you already have expressed your opinion regarding recent
-drama, and I'd like to ask that you please respect our wishes to let
-this issue rest, so that we can all focus on the road forward.
-
-Thanks.
25 doc/blog/Uncategorized/development-environment.md
@@ -1,25 +0,0 @@
-title: Development Environment
-author: ryandahl
-date: Mon Apr 04 2011 20:16:27 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: development-environment
-
-If you're compiling a software package because you need a particular version (e.g. the latest), then it requires a little bit more maintenance than using a package manager like <code>dpkg</code>. Software that you compile yourself should *not* go into <code>/usr</code>, it should go into your home directory. This is part of being a software developer.
-
-One way of doing this is to install everything into <code>$HOME/local/$PACKAGE</code>. Here is how I install node on my machine:<pre>./configure --prefix=$HOME/local/node-v0.4.5 &amp;&amp; make install</pre>
-
-To have my paths automatically set I put this inside my <code>$HOME/.zshrc</code>:<pre>PATH="$HOME/local/bin:/opt/local/bin:/usr/bin:/sbin:/bin"
-LD_LIBRARY_PATH="/opt/local/lib:/usr/local/lib:/usr/lib"
-for i in $HOME/local/*; do
- [ -d $i/bin ] &amp;&amp; PATH="${i}/bin:${PATH}"
- [ -d $i/sbin ] &amp;&amp; PATH="${i}/sbin:${PATH}"
- [ -d $i/include ] &amp;&amp; CPATH="${i}/include:${CPATH}"
- [ -d $i/lib ] &amp;&amp; LD_LIBRARY_PATH="${i}/lib:${LD_LIBRARY_PATH}"
- [ -d $i/lib/pkgconfig ] &amp;&amp; PKG_CONFIG_PATH="${i}/lib/pkgconfig:${PKG_CONFIG_PATH}"
- [ -d $i/share/man ] &amp;&amp; MANPATH="${i}/share/man:${MANPATH}"
-done</pre>
-
-Node is under sufficiently rapid development that <i>everyone</i> should be compiling it themselves. A corollary of this is that <code>npm</code> (which should be installed alongside Node) does not require root to install packages.
-
-CPAN and RubyGems have blurred the lines between development tools and system package managers. With <code>npm</code> we wish to draw a clear line: it is not a system package manager. It is not for installing firefox or ffmpeg or OpenSSL; it is for rapidly downloading, building, and setting up Node packages. <code>npm</code> is a <i>development</i> tool. When a program written in Node becomes sufficiently mature it should be distributed as a tarball, <code>.deb</code>, <code>.rpm</code>, or other package system. It should not be distributed to end users with <code>npm</code>.
34 doc/blog/Uncategorized/evolving-the-node-js-brand.md
@@ -1,34 +0,0 @@
-title: Evolving the Node.js Brand
-author: Emily Tanaka-Delgado
-date: Mon Jul 11 2011 12:02:45 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: evolving-the-node-js-brand
-
-To echo <a href="http://nodejs.org/">Node</a>’s evolutionary nature, we have refreshed the identity to help mark an exciting time for developers, businesses and users who benefit from the pioneering technology.
-
-<strong>Building a brand</strong>
-
-We began exploring elements to express Node.js and jettisoned preconceived notions about what we thought Node should look like, and focused on what Node is: <strong>kinetic</strong>,<ins cite="mailto:EMILY%20TANAKA-DELGADO" datetime="2011-07-09T18:32"></ins><strong>connected</strong>, <strong>scalable</strong>, <strong>modular</strong>, <strong>mechanical</strong> and <strong>organic</strong>. Working with designer <a href="http://www.chrisglass.com">Chris Glass</a>, our explorations emphasized Node's dynamism and formed a visual language based on structure, relationships and interconnectedness.
-
-<img class="alignnone size-full wp-image-184" title="grid" src="http://nodeblog.files.wordpress.com/2011/07/grid.png" alt="" width="520" height="178" />
-
-Inspired by <strong>process visualization, </strong>we discovered pattern, form, and by relief, the hex shape. The angled infrastructure encourages energy to move through the letterforms.
-
-<img class="alignnone size-full wp-image-185" title="nodejs" src="http://nodeblog.files.wordpress.com/2011/07/nodejs.png" alt="" width="520" height="178" />
-
-This language can expand into the organic network topography of Node or distill down into a single hex connection point.
-
-This scaling represents the dynamic nature of Node in a simple, distinct manner.
-
-<img title="Node.js network" src="http://joyeur.files.wordpress.com/2011/07/network.png" alt="" width="560" height="270" />
-
-We look forward to exploring<ins cite="mailto:EMILY%20TANAKA-DELGADO" datetime="2011-07-09T18:30"> </ins>this visual language as the technology charges into a very promising future.
-
-<img title="Node.js nebula" src="http://joyeur.files.wordpress.com/2011/07/node.png" alt="" width="560" height="460" />
-
-We hope you'll have fun using it.
-
-To download the new logo, visit <a href="http://nodejs.org/logos/">nodejs.org/logos</a>.
-
-<ins cite="mailto:EMILY%20TANAKA-DELGADO" datetime="2011-07-09T18:32"><img title="Tri-color Node" src="http://joyeur.files.wordpress.com/2011/07/tri-color-node.png" alt="" width="560" height="180" /></ins>
12 doc/blog/Uncategorized/growing-up.md
@@ -1,12 +0,0 @@
-title: Growing up
-author: ryandahl
-date: Thu Dec 15 2011 11:59:15 GMT-0800 (PST)
-status: publish
-category: Uncategorized
-slug: growing-up
-
-This week Microsoft announced <a href="https://www.windowsazure.com/en-us/develop/nodejs/">support for Node in Windows Azure</a>, their cloud computing platform. For the Node core team and the community, this is an important milestone. We've worked hard over the past six months reworking Node's machinery to support IO completion ports and Visual Studio to provide a good native port to Windows. The overarching goal of the port was to expand our user base to the largest number of developers. Happily, this has paid off in the form of being a first class citizen on Azure. Many users who would have never used Node as a pure unix tool are now up and running on the Windows platform. More users translates into a deeper and better ecosystem of modules, which makes for a better experience for everyone.
-
-We also redesigned <a href="http://nodejs.org">our website</a> - something that we've put off for a long time because we felt that Node was too nascent to dedicate marketing to it. But now that we have binary distributions for Macintosh and Windows, have bundled npm, and are <a href="https://twitter.com/#!/mranney/status/145778414165569536">serving millions of users</a> at various companies, we felt ready to indulge in a new website and share a few of our success stories on the home page.
-
-Work is on-going. We continue to improve the software, making performance improvements and adding isolate support, but Node is growing up.
14 doc/blog/Uncategorized/jobs-nodejs-org.md
@@ -1,14 +0,0 @@
-title: jobs.nodejs.org
-author: ryandahl
-date: Thu Mar 24 2011 23:05:22 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: jobs-nodejs-org
-
-We are starting an official jobs board for Node. There are two goals for this
-
-1. Promote the small emerging economy around this platform by having a central space for employers to find Node programmers.
-
-2. Make some money. We work hard to build this platform and taking a small tax for job posts seems like a reasonable "tip jar".
-
-<a href="http://jobs.nodejs.org">jobs.nodejs.org</a>
84 doc/blog/Uncategorized/ldapjs-a-reprise-of-ldap.md
@@ -1,84 +0,0 @@
-title: ldapjs: A reprise of LDAP
-author: mcavage
-date: Thu Sep 08 2011 14:25:43 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: ldapjs-a-reprise-of-ldap
-
-This post has been about 10 years in the making. My first job out of college was at IBM working on the <a title="Tivoli Directory Server" href="http://www-01.ibm.com/software/tivoli/products/directory-server/">Tivoli Directory Server</a>, and at the time I had a preconceived notion that working on anything related to Internet RFCs was about as hot as you could get. I spent a lot of time back then getting "down and dirty" with everything about LDAP: the protocol, performance, storage engines, indexing and querying, caching, customer use cases and patterns, general network server patterns, etc. Basically, I soaked up as much as I possibly could while I was there. On top of that, I listened to all the "gray beards" tell me about the history of LDAP, which was a bizarre marriage of telecommunications conglomerates and graduate students. The point of this blog post is to give you a crash course in LDAP, and explain what makes <a title="ldapjs" href="http://ldapjs.org">ldapjs</a> different. Allow me to be the gray beard for a bit...
-<h2>What is LDAP and where did it come from?</h2>
-
-Directory services were largely pioneered by the telecommunications companies (e.g., AT&amp;T) to allow fast information retrieval of all the crap you'd expect would be in a telephone book and directory. That is, given a name, or an address, or an area code, or a number, or a foo, support looking up customer records, billing information, routing information, etc. The efforts of several telcos came to exist in the <a title="X.500" href="http://en.wikipedia.org/wiki/X.500">X.500</a> standard(s). An X.500 directory is one of the most complicated beasts you can possibly imagine, but on a high note, there's
-probably not a thing you can imagine in a directory service that wasn't thought of in there. It is literally the kitchen sink. Oh, and it doesn't run over IP (it's <em>actually</em> on the <a title="OSI Model" href="http://en.wikipedia.org/wiki/OSI_model">OSI</a> model).
-
-Several years after X.500 had been deployed (at telcos, academic institutions, etc.), it became clear that the Internet was "for real." <a title="LDAP" href="http://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol">LDAP</a>, the "Lightweight Directory Access Protocol," was invented to act purely as an IP-accessible gateway to an X.500 directory.
-
-At some point in the early 90's, a <a title="Tim Howes" href="http://en.wikipedia.org/wiki/Tim_Howes">graduate student</a> at the University of Michigan (with some help) cooked up the "grandfather" implementation of the LDAP protocol, which wasn't actually a "gateway," but rather a stand-alone implementation of LDAP. Said implementation, like many things at the time, was a process-per-connection concurrency model, and had "backends" (aka storage engine) for the file system and the Unix DB API. At some point the <a title="Berkeley Database" href="http://www.oracle.com/technetwork/database/berkeleydb/index.html">Berkeley Database </a>(BDB) was put in, and still remains the de facto storage engine for most LDAP directories.
-
-Ok, so a graduate student at UM wrote an LDAP server that wasn't a gateway. So what? Well, that UM code base turns out to be the thing that pretty much every vendor did a source license for. Those graduate students went off to Netscape later in the 90's, and largely dominated the market of LDAP middleware until <a title="Active Directory" href="http://en.wikipedia.org/wiki/Active_Directory">Active Directory</a> came along many years later (as far as I know, Active Directory is "from scratch", since while it's "almost" LDAP, it's different in a lot of ways). That Netscape code base was further bought and sold over the years to iPlanet, Sun Microsystems, and Red Hat (I'm probably missing somebody in that chain). It now lives in the Fedora umbrella as '<a title="389 Directory Server" href="http://directory.fedoraproject.org/">389 Directory Server</a>.' Probably the most popular fork of that code base now is <a title="OpenLDAP" href="http://www.openldap.org/">OpenLDAP</a>.
-
-IBM did the same thing, and the Directory Server I worked on was a fork of the UM code too, but it heavily diverged from the Netscape branches. The divergence was primarily due to: (1) backing to DB2 as opposed to BDB, and (2) needing to run on IBM's big iron like OS/400 and Z series mainframes.
-
-Macro point is that there have actually been very few "fresh" implementations of LDAP, and it gets a pretty bad reputation because at the end of the day you've got 20 years of "bolt-ons" to grad student code. Oh, and it was born out of ginormous telcos, so of course the protocol is overly complex.
-
-That said, while there certainly is some wacky stuff in the LDAP protocol itself, it really suffered from poor and buggy implementations more than the fact that LDAP itself was fundamentally flawed. As <a title="Engine Yard LDAP" href="http://www.engineyard.com/blog/2009/ldap-directories-the-forgotten-nosql/">engine yard pointed out a few years back</a>, you can think of LDAP as the original NoSQL store.
-<h2>LDAP: The Good Parts</h2>
-
-So what's awesome about LDAP? Since it's a directory system it maintains a hierarchy of your data, which as an information management pattern aligns
-with _a lot_ of use cases (the quintessential example is white pages for people in your company, but subscriptions to SaaS applications, "host groups"
-for tracking machines/instances, physical goods tracking, etc., all have use cases that fit that organization scheme). For example, presumably at your job
-you have a "reporting chain." Let's say a given record in LDAP (I'll use myself as a guinea pig here) looks like:
-<pre> firstName: Mark
- lastName: Cavage
- city: Seattle
- uid: markc
- state: Washington
- mail: mcavagegmailcom
- phone: (206) 555-1212
- title: Software Engineer
- department: 123456
- objectclass: joyentPerson</pre>
-The record for me would live under the tree of engineers I report to (and as an example some other popular engineers under said vice president) would look like:
-<pre> uid=david
- /
- uid=bryan
- / | \
- uid=markc uid=ryah uid=isaacs</pre>
-Ok, so we've got a tree. It's not tremendously different from your filesystem, but how do we find people? LDAP has a rich search filter syntax that makes a lot of sense for key/value data (far more than tacking Map Reduce jobs on does, imo), and all search queries take a "start point" in the tree. Here's an example: let's say I wanted to find all "Software Engineers" in the entire company, a filter would look like:
-<pre>     (title="Software Engineer")</pre>
-And I'd just start my search from 'uid=david' in the example above. Let's say I wanted to find all software engineers who worked in Seattle:
-<pre>     (&amp;(title="Software Engineer")(city=Seattle))</pre>
-I could keep going, but the gist is that LDAP has "full" boolean predicate logic, wildcard filters, etc. It's really rich.
-
-Oh, and on top of the technical merits, better or worse, it's an established standard for both administrators and applications (i.e., most "shipped" intranet software has either a local user repository or the ability to leverage an LDAP server somewhere). So there's a lot of compelling reasons to look at leveraging LDAP.
-<h2>ldapjs: Why do I care?</h2>
-
-As I said earlier, I spent a lot of time at IBM observing how customers used LDAP, and the real items I took away from that experience were:
-<ul>
- <li>LDAP implementations have suffered a lot from never having been designed from the ground up for a large number of concurrent connections with asynchronous operations.</li>
- <li>There are use cases for LDAP that just don't always fit the traditional "here's my server and storage engine" model. A lot of simple customer use cases wanted an LDAP access point, but not be forced into taking the heavy backends that came with it (they wanted the original gateway model!). There was an entire "sub" industry for this known as "<a title="Metadirectory" href="http://en.wikipedia.org/wiki/Metadirectory">meta directories</a>" back in the late 90's and early 2000's.</li>
- <li>Replication was always a sticking point. LDAP vendors all tried to offer a big multi-master, multi-site replication model. It was a lot of "bolt-on" complexity, done before the <a title="CAP Theorem" href="http://en.wikipedia.org/wiki/CAP_theorem">CAP theorem</a> was written, and certainly before it was accepted as "truth."</li>
- <li>Nobody uses all of the protocol. In fact, 20% of the features solve 80% of the use cases (I'm making that number up, but you get the idea).</li>
-</ul>
-
-For all the good parts of LDAP, those are really damned big failing points, and even I eventually abandoned LDAP for the greener pastures of NoSQL somewhere
-along the way. But it always nagged at me that LDAP didn't get its due because of a lot of implementation problems (to be clear, if I could, I'd change some
-aspects of the protocol itself too, but that's a lot harder).
-
-Well, in the last year, I went to work for <a title="Joyent" href="http://www.joyent.com/">Joyent</a>, and like everyone else, we have several use problems that are classic directory service problems. If you break down the list I outlined above:
-<ul>
- <li><strong>Connection-oriented and asynchronous:</strong> Holy smokes batman, <a title="node.js" href="http://nodejs.org/">node.js</a> is a completely kick-ass event-driven asynchronous server platform that manages connections like a boss. Check!</li>
- <li><strong>Lots of use cases:</strong> Yeah, we've got some. Man, the <a title="sinatra" href="http://www.sinatrarb.com/">sinatra</a>/<a title="express" href="http://expressjs.com/">express</a> paradigm is so easy to slap over anything. How about we just do that and leave as many use cases open as we can. Check!</li>
- <li><strong>Replication is hard. CAP is right:</strong> There are a lot of distributed databases out vying to solve exactly this problem. At Joyent we went with <a title="Riak" href="http://www.basho.com/">Riak</a>. Check!</li>
- <li><strong>Don't need all of the protocol:</strong> I'm lazy. Let's just skip the stupid things most people don't need. Check!</li>
-</ul>
-
-So that's the crux of ldapjs right there. Giving you the ability to put LDAP back into your application while nailing those 4 fundamental problems that plague most existing LDAP deployments.
-
-The obvious question is how it turned out, and the answer is, honestly, better than I thought it would. When I set out to do this, I actually assumed I'd be shipping a much smaller percentage of the RFC than is there. There's actually about 95% of the core RFC implemented. I wasn't sure if the marriage of this protocol to node/JavaScript would work out, but if you've used express ever, this should be _really_ familiar. And I tried to make it as natural as possible to use "pure" JavaScript objects, rather than requiring the developer to understand <a title="ASN.1" href="http://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One">ASN.1</a> (the binary wire protocol) or the<a title="RFC 4510" href="http://tools.ietf.org/html/rfc4510"> LDAP RFC</a> in detail (this one mostly worked out; ldap_modify is still kind of a PITA).
-
-Within 24 hours of releasing ldapjs on <a title="twitter" href="http://twitter.com/#!/mcavage/status/106767571012952064">Twitter</a>, there was an <a title="github ldapjs address book" href="https://gist.github.com/1173999">implementation of an address book</a> that works with Thunderbird/Evolution, by the end of that weekend there was some <a href="http://i.imgur.com/uR16U.png">slick integration with CouchDB</a>, and ldapjs even got used in one of the <a href="http://twitter.com/#!/jheusala/status/108977708649811970">node knockout apps</a>. Off to a pretty good start!
-
-<h2>The Road Ahead</h2>
-
-Hopefully you've been motivated to learn a little bit more about LDAP and try out <a href="http://ldapjs.org">ldapjs</a>. The best place to start is probably the <a title="ldapjs guide" href="http://ldapjs.org/guide.html">guide</a>. After that you'll probably need to pick up a book from <a href="http://www.amazon.com/Understanding-Deploying-LDAP-Directory-Services/dp/0672323168">back in the day</a>. ldapjs itself is still in its infancy; there's quite a bit of room to add some slick client-side logic (e.g., connection pools, automatic reconnects), easy to use schema validation, backends, etc. By the time this post is live, there will be experimental <a href="http://en.wikipedia.org/wiki/DTrace">dtrace</a> support if you're running on Mac OS X or preferably Joyent's <a href="http://smartos.org/">SmartOS</a> (shameless plug). And that nagging percentage of the protocol I didn't do will get filled in over time I suspect. If you've got an interest in any of this, send me some pull requests, but most importantly, I just want to see LDAP not just be a skeleton in the closet and get used in places where you should be using it. So get out there and write you some LDAP.
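As a rough sketch of the express-like style the post above describes, an ldapjs search handler might look like the following (the base DN, port, and entry values are invented for illustration; see ldapjs.org for the actual API and guide):

```js
// Illustrative only: a tiny ldapjs server answering search requests.
var ldap = require('ldapjs');
var server = ldap.createServer();

server.search('o=example', function (req, res, next) {
  // Send one hard-coded entry back for any search under o=example.
  res.send({
    dn: 'uid=markc, o=example',
    attributes: { uid: 'markc', title: 'Software Engineer' }
  });
  res.end();
  return next();
});

server.listen(1389, function () {
  console.log('LDAP server listening at ' + server.url);
});
```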
45 doc/blog/Uncategorized/libuv-status-report.md
@@ -1,45 +0,0 @@
-title: libuv status report
-author: ryandahl
-date: Fri Sep 23 2011 12:45:50 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: libuv-status-report
-
-We <a href="http://blog.nodejs.org/2011/06/23/porting-node-to-windows-with-microsoft%E2%80%99s-help/">announced</a> back in July that with Microsoft's support Joyent would be porting Node to Windows. This effort is ongoing but I thought it would be nice to make a status report post about the new platform library <code><a href="https://github.com/joyent/libuv">libuv</a></code> which has resulted from porting Node to Windows.
-
-<code>libuv</code>'s purpose is to abstract platform-dependent code in Node into one place where it can be tested for correctness and performance before bindings to V8 are added. Since Node is totally non-blocking, <code>libuv</code> turns out to be a rather useful library itself: a BSD-licensed, minimal, high-performance, cross-platform networking library.
-
-We attempt to not reinvent the wheel where possible. The entire Unix backend sits heavily on Marc Lehmann's beautiful libraries <a href="http://software.schmorp.de/pkg/libev.html">libev</a> and <a href="http://software.schmorp.de/pkg/libeio.html">libeio</a>. For DNS we integrated with Daniel Stenberg's <a href="http://c-ares.haxx.se/">C-Ares</a>. For cross-platform build-system support we're relying on Chrome's <a href="http://code.google.com/p/gyp/">GYP</a> meta-build system.
-
-The current implemented features are:
-<ul>
- <li>Non-blocking TCP sockets (using IOCP on Windows)</li>
- <li>Non-blocking named pipes</li>
- <li>UDP</li>
- <li>Timers</li>
- <li>Child process spawning</li>
- <li>Asynchronous DNS via <a href="http://c-ares.haxx.se/">c-ares</a> or <code>uv_getaddrinfo</code>.</li>
- <li>Asynchronous file system APIs <code>uv_fs_*</code></li>
- <li>High resolution time <code>uv_hrtime</code></li>
- <li>Current executable path look up <code>uv_exepath</code></li>
- <li>Thread pool scheduling <code>uv_queue_work</code></li>
-</ul>
-The features we are still working on are:
-<ul>
- <li>File system events (Currently supports inotify, <code>ReadDirectoryChangesW</code> and will support kqueue and event ports in the near future.) <code>uv_fs_event_t</code></li>
- <li>VT100 TTY <code>uv_tty_t</code></li>
- <li>Socket sharing between processes <code>uv_ipc_t (<a href="https://gist.github.com/1233593">planned API</a>)</code></li>
-</ul>
-For complete documentation see the header file: <a href="https://github.com/joyent/libuv/blob/03d0c57ea216abd611286ff1e58d4e344a459f76/include/uv.h">include/uv.h</a>. There are a number of tests in <a href="https://github.com/joyent/libuv/tree/3ca382be741ec6ce6a001f0db04d6375af8cd642/test">the test directory</a> which demonstrate the API.
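-
-From the JavaScript side, these are the primitives that back familiar Node core APIs. As a rough, illustrative mapping (standard Node APIs, not taken from the post itself):
-
-<pre>
-// timers
-setTimeout(function() { console.log('tick'); }, 1000);
-
-// asynchronous DNS (c-ares / uv_getaddrinfo)
-require('dns').lookup('nodejs.org', function(err, address) {
-  console.log(err || address);
-});
-
-// asynchronous file system APIs (uv_fs_*)
-require('fs').stat(__filename, function(err, stats) {
-  console.log(err || stats.size);
-});
-
-// child process spawning
-var child = require('child_process').spawn('echo', ['hello libuv']);
-child.stdout.pipe(process.stdout);
-</pre>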
-
-<code>libuv</code> supports Microsoft Windows operating systems since Windows XP SP2 (it can be built with either Visual Studio or MinGW), Solaris 121 and later using the GCC toolchain, Linux 2.6 or better using the GCC toolchain, and Macintosh Darwin using the GCC or Xcode toolchain. It is known to work on the BSDs, but we do not check the build regularly.
-
-In addition to Node v0.5, a number of projects have begun to use <code>libuv</code>:
-<ul>
- <li>Mozilla's <a href="https://github.com/graydon/rust">Rust</a></li>
- <li>Tim Caswell's <a href="https://github.com/creationix/luanode">LuaNode</a></li>
- <li>Ben Noordhuis and Bert Belder's <a href="https://github.com/bnoordhuis/phode">Phode</a> async PHP project</li>
- <li>Kerry Snyder's <a href="https://github.com/kersny/libuv-csharp">libuv-csharp</a></li>
- <li>Andrea Lattuada's <a href="https://gist.github.com/1195428">web server</a></li>
-</ul>
-We hope to see more people contributing and using <code>libuv</code> in the future!
11 doc/blog/Uncategorized/node-meetup-this-thursday.md
@@ -1,11 +0,0 @@
-title: Node Meetup this Thursday
-author: ryandahl
-date: Tue Aug 02 2011 21:37:02 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: node-meetup-this-thursday
-
-<a href="http://nodejs.org/meetup/" title="http://nodejs.org/meetup/ ">http://nodejs.org/meetup/</a>
-<a href="http://nodemeetup.eventbrite.com/">http://nodemeetup.eventbrite.com/</a>
-
-Three companies will describe their distributed Node applications. Sign up soon, space is limited!
12 doc/blog/Uncategorized/node-office-hours-cut-short.md
@@ -1,12 +0,0 @@
-title: Node Office Hours Cut Short
-author: ryandahl
-date: Thu Apr 28 2011 09:04:35 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: node-office-hours-cut-short
-
-This week office hours are only from 4pm to 6pm. Isaac will be in the Joyent office in SF - everyone else is out of town. Sign up at http://nodeworkup.eventbrite.com/ if you would like to come.
-
-The week after, Thursday May 5th, we will all be at NodeConf in Portland.
-
-Normal office hours resume Thursday May 12th.
12 doc/blog/Uncategorized/office-hours.md
@@ -1,12 +0,0 @@
-title: Office Hours
-author: ryandahl
-date: Wed Mar 23 2011 21:42:47 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: office-hours
-
-Starting next Thursday Isaac, Tom, and I will be holding weekly office hours at <a href="http://maps.google.com/maps?q=345+California+St,+San+Francisco,+CA+94104&amp;layer=c&amp;sll=37.793040,-122.400491&amp;cbp=13,178.31,,0,-60.77&amp;cbll=37.793131,-122.400484&amp;hl=en&amp;sspn=0.006295,0.006295&amp;ie=UTF8&amp;hq=&amp;hnear=345+California+St,+San+Francisco,+California+94104&amp;ll=37.793131,-122.400484&amp;spn=0.001295,0.003428&amp;z=19&amp;panoid=h0dlz3VG-hMKlzOu0LxMIg">Joyent HQ</a> in San Francisco. Office hours are meant to be subdued working time - there are no talks and no alcohol. Bring your bugs or just come and hack with us.
-
-Our building requires that everyone attending be on a list so you must sign up at <a href="http://nodeworkup01.eventbrite.com/">Event Brite</a>.
-
-We start at 4p and end promptly at 8p.
12 doc/blog/Uncategorized/porting-node-to-windows-with-microsoft%e2%80%99s-help.md
@@ -1,12 +0,0 @@
-title: Porting Node to Windows With Microsoft’s Help
-author: ryandahl
-date: Thu Jun 23 2011 15:22:58 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: porting-node-to-windows-with-microsoft%e2%80%99s-help
-
-I'm pleased to announce that Microsoft is partnering with Joyent in formally contributing resources towards porting Node to Windows. As you may have heard in <a href="http://nodejs.org/nodeconf.pdf" title="a talk">a talk</a> we gave earlier this year, we have started the undertaking of a native port to Windows - targeting the high-performance IOCP API.
-
-This requires a rather large modification of the core structure, and we're very happy to have official guidance and engineering resources from Microsoft. <a href="https://www.cloudkick.com/">Rackspace</a> is also contributing <a href="https://github.com/piscisaureus">Bert Belder</a>'s time to this undertaking.
-
-The result will be official binary node.exe releases on nodejs.org, which will work on Windows Azure and other Windows versions as far back as Server 2003.
60 doc/blog/Uncategorized/profiling-node-js.md
@@ -1,60 +0,0 @@
-title: Profiling Node.js
-author: Dave Pacheco
-date: Wed Apr 25 2012 13:48:58 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: profiling-node-js
-
-It's incredibly easy to visualize where your Node program spends its time using DTrace and <a href="http://github.com/davepacheco/node-stackvis">node-stackvis</a> (a Node port of Brendan Gregg's <a href="http://github.com/brendangregg/FlameGraph/">FlameGraph</a> tool):
-
-<ol>
- <li>Run your Node.js program as usual.</li>
- <li>In another terminal, run:
- <pre>
-$ dtrace -n 'profile-97/execname == "node" &amp;&amp; arg1/{
- @[jstack(150, 8000)] = count(); } tick-60s { exit(0); }' &gt; stacks.out</pre>
- This will sample about 100 times per second for 60 seconds and emit results to stacks.out. <strong>Note that this will sample all running programs called "node". If you want a specific process, replace <code>execname == "node"</code> with <code>pid == 12345</code> (the process id).</strong>
- </li>
- <li>Use the "stackvis" tool to transform this directly into a flame graph. First, install it:
- <pre>$ npm install -g stackvis</pre>
- then use <code>stackvis</code> to convert the DTrace output to a flamegraph:
- <pre>$ stackvis dtrace flamegraph-svg &lt; stacks.out &gt; stacks.svg</pre>
- </li>
- <li>Open stacks.svg in your favorite browser.</li>
-</ol>
-
-You'll be looking at something like this:
-
-<a href="http://www.cs.brown.edu/~dap/helloworld.svg"><img src="http://dtrace.org/blogs/dap/files/2012/04/helloworld-flamegraph-550x366.png" alt="" title="helloworld-flamegraph" width="550" height="366" class="aligncenter size-large wp-image-1047" /></a>
-
-This is a visualization of all of the profiled call stacks. This example is from the "hello world" HTTP server on the <a href="http://nodejs.org">Node.js</a> home page under load. Start at the bottom, where you have "main", which is present in most Node stacks because Node spends most of its on-CPU time in the main thread. Above each row, you have the functions called by the frame beneath it. As you move up, you'll see actual JavaScript function names. The boxes in each row are not in chronological order, but their width indicates how much time was spent there. When you hover over each box, you can see exactly what percentage of time is spent in each function. This lets you see at a glance where your program spends its time.
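-
-For context, the program profiled in this example is essentially the "hello world" HTTP server from the Node.js home page, reproduced here from memory as a reference point (the program you profile can be anything):
-
-<pre>
-var http = require('http');
-http.createServer(function (req, res) {
-  res.writeHead(200, {'Content-Type': 'text/plain'});
-  res.end('Hello World\n');
-}).listen(1337, '127.0.0.1');
-console.log('Server running at http://127.0.0.1:1337/');
-</pre>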
-
-That's the summary. There are a few prerequisites:
-
-<ul>
- <li>You must gather data on a system that supports DTrace with the Node.js ustack helper. For now, this pretty much means <a href="http://illumos.org/">illumos</a>-based systems like <a href="http://smartos.org/">SmartOS</a>, including the Joyent Cloud. <strong>MacOS users:</strong> OS X supports DTrace, but not ustack helpers. The way to get this changed is to contact your Apple developer liaison (if you're lucky enough to have one) or <strong>file a bug report at bugreport.apple.com</strong>. I'd suggest referencing existing bugs 5273057 and 11206497. More bugs filed (even if closed as dups) show more interest and make it more likely Apple will choose to fix this.</li>
- <li>You must be on 32-bit Node.js 0.6.7 or later, built <code>--with-dtrace</code>. The helper doesn't work with 64-bit Node yet. On illumos (including SmartOS), development releases (the 0.7.x train) include DTrace support by default.</li>
-</ul>
-
-There are a few other notes:
-
-<ul>
- <li>You can absolutely profile apps <strong>in production</strong>, not just development, since compiling with DTrace support has very minimal overhead. You can start and stop profiling without restarting your program.</li>
- <li>You may want to run the stacks.out output through <code>c++filt</code> to demangle C++ symbols. Be sure to use the <code>c++filt</code> that came with the compiler you used to build Node. For example:
- <pre>c++filt &lt; stacks.out &gt; demangled.out</pre>
- then you can use demangled.out to create the flamegraph.
- </li>
- <li>If you want, you can filter stacks containing a particular function. The best way to do this is to first collapse the original DTrace output, then grep out what you want:
- <pre>
-$ stackvis dtrace collapsed &lt; stacks.out | grep SomeFunction &gt; collapsed.out
-$ stackvis collapsed flamegraph-svg &lt; collapsed.out &gt; stacks.svg</pre>
- </li>
-  <li>If you've used Brendan's FlameGraph tools, you'll notice the coloring is a little different in the above flamegraph. I ported his tools to Node first so I could incorporate them more easily into other Node programs, but I've also been playing with different coloring options. The current default uses hue to denote stack depth and saturation to indicate time spent. (These are also indicated by position and size.) Other ideas include coloring by module (so V8, JavaScript, libc, etc. show up as different colors).
- </li>
-</ul>
-
-For more on the underlying pieces, see my <a href="http://dtrace.org/blogs/dap/2012/01/05/where-does-your-node-program-spend-its-time/">previous post on Node.js profiling</a> and <a href="http://dtrace.org/blogs/brendan/2011/12/16/flame-graphs/">Brendan's post on Flame Graphs</a>.
-
-<hr />
-
-Dave Pacheco blogs at <a href="http://dtrace.org/blogs/dap">dtrace.org</a>
13 doc/blog/Uncategorized/some-new-node-projects.md
@@ -1,13 +0,0 @@
-title: Some New Node Projects
-author: ryandahl
-date: Mon Aug 29 2011 08:30:41 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: some-new-node-projects
-
-<ul>
-<li>Superfeedr released <a href="http://blog.superfeedr.com/node-xmpp-server/">a Node XMPP Server</a>. "<i>Since <a href="http://spaceboyz.net/~astro/">astro</a> had been doing an <strong>amazing work</strong> with his <a href="https://github.com/astro/node-xmpp">node-xmpp</a> library to build <em>Client</em>, <em>Components</em> and even <em>Server to server</em> modules, the logical next step was to try to build a <em>Client to Server</em> module so that we could have a full blown server. That&#8217;s what we worked on the past couple days, and <a href="https://github.com/superfeedr/node-xmpp">it&#8217;s now on Github</a>!</i>"</li>
-
-<li>Joyent's Mark Cavage released <a href="http://ldapjs.org/">LDAP.js</a>. "<i>ldapjs is a pure JavaScript, from-scratch framework for implementing <a href="http://tools.ietf.org/html/rfc4510">LDAP</a> clients and servers in <a href="http://nodejs.org">Node.js</a>. It is intended for developers used to interacting with HTTP services in node and <a href="http://expressjs.com">express</a>.</i>"</li>
-
-<li>Microsoft's Tomasz Janczuk released <a href="http://tomasz.janczuk.org/2011/08/hosting-nodejs-applications-in-iis-on.html">iisnode</a>. "<i>The <a href="https://github.com/tjanczuk/iisnode">iisnode</a> project provides a native IIS 7.x module that allows hosting of node.js applications in IIS.</i>"<br /><br />Scott Hanselman posted <a href="http://www.hanselman.com/blog/InstallingAndRunningNodejsApplicationsWithinIISOnWindowsAreYouMad.aspx">a detailed walkthrough</a> of how to get started with iisnode.</li>
-</ul>
10 doc/blog/Uncategorized/the-videos-from-node-meetup.md
@@ -1,10 +0,0 @@
-title: The Videos from the Meetup
-author: ryandahl
-date: Fri Aug 12 2011 00:14:34 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: the-videos-from-node-meetup
-
-Uber, Voxer, and Joyent described how they use Node in production
-
-<a href="http://www.joyent.com/blog/node-js-meetup-distributed-web-architectures/">http://www.joyent.com/blog/node-js-meetup-distributed-web-architectures/</a>
54 doc/blog/Uncategorized/tj-fontaine-new-node-lead.md
@@ -1,54 +0,0 @@
-title: The Next Phase of Node.js
-date: Wed Jan 15 09:00:00 PST 2014
-author: Isaac Z. Schlueter
-slug: the-next-phase-of-node-js
-
-Node's growth has continued and accelerated immensely over the last
-few years. More people are developing and sharing more code with Node
-and npm than I would have ever imagined. Countless companies are
-using Node, and npm along with it.
-
-Over the last year, [TJ Fontaine](https://twitter.com/tjfontaine) has become absolutely essential to the
-Node.js project. He's been building releases, managing the test bots,
-[fixing nasty
-bugs](http://www.joyent.com/blog/walmart-node-js-memory-leak) and
-making decisions for the project with constant focus on the needs of
-our users. He was responsible for an update to MDB to [support
-running ::findjsobjects on Linux core
-dumps](http://www.slideshare.net/bcantrill/node-summit2013), and is
-working on a shim layer that will provide a stable C interface for
-Node binary addons. In partnership with Joyent and The Node Firm,
-he's helped to create a path forward for scalable issue triaging.
-He's become the primary point of contact keeping us all driving the
-project forward together.
-
-Anyone who's been close to the core project knows that he's been
-effectively leading the project for a while now, so we're making it
-official. Effective immediately, TJ Fontaine is the Node.js project
-lead. I will remain a Node core committer, and expect to continue to
-contribute to the project in that role. My primary focus, however,
-will be npm.
-
-At this point, npm needs work, and I am eager to deliver what the Node
-community needs from its package manager. I am starting a company,
-npm, Inc., to deliver new products and services related to npm. I'll
-be sharing many more details soon about exactly how this is going to
-work, and what we'll be offering. For now, suffice it to say that
-everything currently free will remain free, and everything currently
-flaky will get less flaky. Pursuing new revenue is how we can keep
-providing the npm registry service in a long-term sustainable way, and
-it has to be done very carefully so that we don't damage what we've
-all built together.
-
-npm is what I'm most passionate about, and I am now in a position to
-give it my full attention. I've done more than I could have hoped to
-accomplish in running Node core, and it's well past time to hand the
-control of the project off to its next gatekeeper.
-
-TJ is exactly the leader who can help us take Node.js to 1.0 and
-beyond. He brings professionalism, rigor, and a continued focus on
-inclusive community values and culture. In the coming days, TJ will
-spell out his plans in greater detail. I look forward to the places
-that Node will go with his guidance.
-
-Please join me in welcoming him to this new role :)
17 doc/blog/Uncategorized/trademark.md
@@ -1,17 +0,0 @@
-title: Trademark
-author: ryandahl
-date: Fri Apr 29 2011 01:54:18 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: trademark
-
-One of the things Joyent accepted when we took on the Node project was to provide resources to help the community grow. The Node project is amazing because of the expertise, dedication and hard work of the community. However, in all communities there is the possibility of people acting inappropriately. We decided to introduce trademarks on “Node.js” and the “Node logo” in order to ensure that people or organisations who are not investing in the Node community cannot misrepresent, or create confusion about, their own role or that of their products with Node.
-
-We are big fans of the people who have contributed to Node, and we have worked hard to make sure that existing members of the community will be unaffected by this change. Most people don’t have to do anything: they are free to use the Node.js marks in their free open source projects (see the guidelines). For others, we’ve already granted licenses to use the Node.js marks in their domain names and their businesses. We value all of these contributions to the Node community and hope that we can continue to protect their good names and hard work.
-
-Where does our trademark policy come from? We started by looking at popular open source foundations like the Apache Software Foundation and Linux. By strongly basing our policy on the one used by the Apache Software Foundation we feel that we’ve created a policy which is liberal enough to allow the open source community to easily make use of the mark in the context of free open source software, but secure enough to protect the community’s work from being misrepresented by other organisations.
-
-While we realize that any changes involving lawyers can be intimidating to the community we want to make this transition as smoothly as possible and welcome your questions and feedback on the policy and how we are implementing it.
-
-<a href="http://nodejs.org/trademark-policy.pdf">http://nodejs.org/trademark-policy.pdf</a>
-trademark@joyent.com
23 doc/blog/Uncategorized/version-0-6.md
@@ -1,23 +0,0 @@
-title: Version 0.6 Coming Soon
-author: ryandahl
-date: Tue Oct 25 2011 15:26:23 GMT-0700 (PDT)
-status: publish
-category: Uncategorized
-slug: version-0-6
-
-Version 0.6.0 will be released next week. Please spend some time this
-week upgrading your code to v0.5.10. Report any API differences at <a
-href="https://github.com/joyent/node/wiki/API-changes-between-v0.4-and-v0.6">https://github.com/joyent/node/wiki/API-changes-between-v0.4-and-v0.6</a>
-or report a bug to us at <a
-href="http://github.com/joyent/node/issues">http://github.com/joyent/node/issues</a>
-if you hit problems.
-
-The API changes between v0.4.12 and v0.5.10 are 99% cosmetic, minor,
-and easy to fix. Most people are able to migrate their code in 10
-minutes. Don't fear.
-
-Once you've ported your code to v0.5.10 please help out by testing
-third party modules. Make bug reports. Encourage authors to publish
-new versions of their modules. Go through the list of modules at <a
-href="http://npmjs.org/">http://npmjs.org/</a> and try out random
-ones. This is especially encouraged of Windows users!
BIN  doc/blog/favicon.ico
Binary file not shown
851 doc/blog/feature/streams2.md
@@ -1,851 +0,0 @@
-title: A New Streaming API for Node v0.10
-author: Isaac Z. Schlueter
-date: Fri Dec 21 00:45:13 UTC 2012
-slug: streams2
-category: feature
-
-**tl;dr**
-
-* Node streams are great, except for all the ways in which they're
- terrible.
-* A new Stream implementation is coming in 0.10; it has picked up the
-  nickname "streams2".
-* Readable streams have a `read()` method that returns a buffer or
- null. (More documentation included below.)
-* `'data'` events, `pause()`, and `resume()` will still work as before
-  (except that they'll actually work how you'd expect).
-* Old programs will **almost always** work without modification, but
- streams start out in a paused state, and need to be read from to be
- consumed.
-* **WARNING**: If you never add a `'data'` event handler, or call
- `resume()`, then it'll sit in a paused state forever and never
- emit `'end'`.
-
--------
-
-Throughout the life of Node, we've been gradually iterating on the
-ideal event-based API for handling data. Over time, this developed
-into the "Stream" interface that you see throughout Node's core
-modules and many of the modules in npm.
-
-Consistent interfaces increase the portability and reliability of our
-programs and libraries. Overall, the move from domain-specific events
-and methods towards a unified stream interface was a huge win.
-However, there are still several problems with Node's streams as of
-v0.8. In a nutshell:
-
-1. The `pause()` method doesn't pause. It is advisory-only. In
- Node's implementation, this makes things much simpler, but it's
- confusing to users, and doesn't do what it looks like it does.
-2. `'data'` events come right away (whether you're ready or not).
- This makes it unreasonably difficult to do common tasks like load a
- user's session before deciding how to handle their request.
-3. There is no way to consume a specific number of bytes, and then
- leave the rest for some other part of the program to deal with.
-4. It's unreasonably difficult to implement streams and get all the
- intricacies of pause, resume, write-buffering, and data events
-   correct. The lack of shared classes means that we all have to solve
- the same problems repeatedly, making similar mistakes and similar
- bugs.
-
-Common simple tasks should be easy, or we aren't doing our job.
-People often say that Node is better than most other platforms at this
-stuff, but in my opinion, that is less of a compliment and more of an
-indictment of the current state of software. Being better than the
-next guy isn't enough; we have to be the best imaginable. While they
-were a big step in the right direction, the Streams in Node up until
-now leave a lot to be desired.
-
-So, just fix it, right?
-
-Well, we are sitting on the results of several years of explosive
-growth in the Node community, so any changes have to be made very
-carefully. If we break all the Node programs in 0.10, then no one
-will ever want to upgrade to 0.10, and it's all pointless. We had
-this conversation around 0.4, then again around 0.6, then again around
-0.8. Every time, the conclusion has been "Too much work, too hard to
-make backwards-compatible", and we always had more pressing problems
-to solve.
-
-In 0.10, we cannot put it off any longer. We've bitten the bullet and
-are making a significant change to the Stream implementation. You may
-have seen conversations on twitter or IRC or the mailing list about
-"streams2". I also gave [a talk in
-November](https://dl.dropbox.com/u/3685/presentations/streams2/streams2-ko.pdf)
-about this subject. A lot of node module authors have been involved
-with the development of streams2 (and of course the node core team).
-
-## streams2
-
-The feature is described pretty thoroughly in the documentation, so
-I'm including it below. Please read it, especially the section on
-"compatibility". There's a caveat there that is unfortunately
-unavoidable, but hopefully enough of an edge case that it's easily
-worked around.
-
-The first preview release with this change will be 0.9.4. I highly
-recommend trying this release and providing feedback before it lands
-in a stable version.
-
-As of writing this post, there are some known performance regressions,
-especially in the http module. We are fanatical about maintaining
-performance in Node.js, so of course this will have to be fixed before
-the v0.10 stable release. (Watch for a future blog post on the tools
-and techniques that have been useful in tracking down these issues.)
-
-There may be minor changes as necessary to fix bugs and improve
-performance, but the API at this point should be considered feature
-complete. It correctly does all the things we need it to do, it just
-doesn't do them quite well enough yet. As always, be wary of running
-unstable releases in production, of course, but I encourage you to try
-it out and see what you think. Especially, if you have tests that you
-can run on your modules and libraries, that would be extremely useful
-feedback.
-
---------
-
-# Stream
-
- Stability: 2 - Unstable
-
-A stream is an abstract interface implemented by various objects in
-Node. For example a request to an HTTP server is a stream, as is
-stdout. Streams are readable, writable, or both. All streams are
-instances of [EventEmitter][].
-
-You can load the Stream base classes by doing `require('stream')`.
-There are base classes provided for Readable streams, Writable
-streams, Duplex streams, and Transform streams.
-
-## Compatibility
-
-In earlier versions of Node, the Readable stream interface was
-simpler, but also less powerful and less useful.
-
-* Rather than waiting for you to call the `read()` method, `'data'`
- events would start emitting immediately. If you needed to do some
- I/O to decide how to handle data, then you had to store the chunks
- in some kind of buffer so that they would not be lost.
-* The `pause()` method was advisory, rather than guaranteed. This
- meant that you still had to be prepared to receive `'data'` events
- even when the stream was in a paused state.
-
-In Node v0.10, the Readable class described below was added. For
-backwards compatibility with older Node programs, Readable streams
-switch into "old mode" when a `'data'` event handler is added, or when
-the `pause()` or `resume()` methods are called. The effect is that,
-even if you are not using the new `read()` method and `'readable'`
-event, you no longer have to worry about losing `'data'` chunks.
-
-Most programs will continue to function normally. However, this
-introduces an edge case in the following conditions:
-
-* No `'data'` event handler is added.
-* The `pause()` and `resume()` methods are never called.
-
-For example, consider the following code:
-
-```javascript
-// WARNING! BROKEN!
-net.createServer(function(socket) {
-
-  // we add an 'end' listener, but never consume the data
- socket.on('end', function() {
- // It will never get here.
- socket.end('I got your message (but didnt read it)\n');
- });
-
-}).listen(1337);
-```
-
-In versions of node prior to v0.10, the incoming message data would be
-simply discarded. However, in Node v0.10 and beyond, the socket will
-remain paused forever.
-
-The workaround in this situation is to call the `resume()` method to
-trigger "old mode" behavior:
-
-```javascript
-// Workaround
-net.createServer(function(socket) {
-
- socket.on('end', function() {
- socket.end('I got your message (but didnt read it)\n');
- });
-
- // start the flow of data, discarding it.
- socket.resume();
-
-}).listen(1337);
-```
-
-In addition to new Readable streams switching into old-mode, pre-v0.10
-style streams can be wrapped in a Readable class using the `wrap()`
-method.
-
-## Class: stream.Readable
-
-<!--type=class-->
-
-A `Readable Stream` has the following methods, members, and events.
-
-Note that `stream.Readable` is an abstract class designed to be
-extended with an underlying implementation of the `_read(size)`
-method. (See below.)
-
-### new stream.Readable([options])
-
-* `options` {Object}
- * `highWaterMark` {Number} The maximum number of bytes to store in
- the internal buffer before ceasing to read from the underlying
- resource. Default=16kb
- * `encoding` {String} If specified, then buffers will be decoded to
- strings using the specified encoding. Default=null
- * `objectMode` {Boolean} Whether this stream should behave
- as a stream of objects. Meaning that stream.read(n) returns
- a single value instead of a Buffer of size n
-
-In classes that extend the Readable class, make sure to call the
-constructor so that the buffering settings can be properly
-initialized.
-
-### readable.\_read(size)
-
-* `size` {Number} Number of bytes to read asynchronously
-
-Note: **This function should NOT be called directly.** It should be
-implemented by child classes, and called by the internal Readable
-class methods only.
-
-All Readable stream implementations must provide a `_read` method
-to fetch data from the underlying resource.
-
-This method is prefixed with an underscore because it is internal to
-the class that defines it, and should not be called directly by user
-programs. However, you **are** expected to override this method in
-your own extension classes.
-
-When data is available, put it into the read queue by calling
-`readable.push(chunk)`. If `push` returns false, then you should stop
-reading. When `_read` is called again, you should start pushing more
-data.
-
-The `size` argument is advisory. Implementations where a "read" is a
-single call that returns data can use this to know how much data to
-fetch. Implementations where that is not relevant, such as TCP or
-TLS, may ignore this argument, and simply provide data whenever it
-becomes available. There is no need, for example, to "wait" until
-`size` bytes are available before calling `stream.push(chunk)`.
-
-### readable.push(chunk)
-
-* `chunk` {Buffer | null | String} Chunk of data to push into the read queue
-* return {Boolean} Whether or not more pushes should be performed
-
-Note: **This function should be called by Readable implementors, NOT
-by consumers of Readable subclasses.** The `_read()` function will not
-be called again until at least one `push(chunk)` call is made. If no
-data is available, then you MAY call `push('')` (an empty string) to
-allow a future `_read` call, without adding any data to the queue.
-
-The `Readable` class works by putting data into a read queue to be
-pulled out later by calling the `read()` method when the `'readable'`
-event fires.
-
-The `push()` method will explicitly insert some data into the read
-queue. If it is called with `null` then it will signal the end of the
-data.
-
-In some cases, you may be wrapping a lower-level source which has some
-sort of pause/resume mechanism, and a data callback. In those cases,
-you could wrap the low-level source object by doing something like
-this:
-
-```javascript
-// source is an object with readStop() and readStart() methods,
-// and an `ondata` member that gets called when it has data, and
-// an `onend` member that gets called when the data is over.
-
-var stream = new Readable();
-
-source.ondata = function(chunk) {
- // if push() returns false, then we need to stop reading from source
- if (!stream.push(chunk))
- source.readStop();
-};
-
-source.onend = function() {
- stream.push(null);
-};
-
-// _read will be called when the stream wants to pull more data in
-// the advisory size argument is ignored in this case.
-stream._read = function(n) {
- source.readStart();
-};
-```
-
-### readable.unshift(chunk)
-
-* `chunk` {Buffer | null | String} Chunk of data to unshift onto the read queue
-* return {Boolean} Whether or not more pushes should be performed
-
-This is the corollary of `readable.push(chunk)`. Rather than putting
-the data at the *end* of the read queue, it puts it at the *front* of
-the read queue.
-
-This is useful in certain use-cases where a stream is being consumed
-by a parser, which needs to "un-consume" some data that it has
-optimistically pulled out of the source.
-
-```javascript
-// A parser for a simple data protocol.
-// The "header" is a JSON object, followed by 2 \n characters, and
-// then a message body.
-//
-// Note: This can be done more simply as a Transform stream. See below.
-
-function SimpleProtocol(source, options) {
- if (!(this instanceof SimpleProtocol))
-    return new SimpleProtocol(source, options);
-
- Readable.call(this, options);
- this._inBody = false;
- this._sawFirstCr = false;
-
- // source is a readable stream, such as a socket or file
- this._source = source;
-
- var self = this;
- source.on('end', function() {
- self.push(null);
- });
-
- // give it a kick whenever the source is readable
- // read(0) will not consume any bytes
- source.on('readable', function() {
- self.read(0);
- });
-
- this._rawHeader = [];
- this.header = null;
-}
-
-SimpleProtocol.prototype = Object.create(
- Readable.prototype, { constructor: { value: SimpleProtocol }});
-
-SimpleProtocol.prototype._read = function(n) {
- if (!this._inBody) {
- var chunk = this._source.read();
-
- // if the source doesn't have data, we don't have data yet.
- if (chunk === null)
- return this.push('');
-
- // check if the chunk has a \n\n
- var split = -1;
- for (var i = 0; i < chunk.length; i++) {
- if (chunk[i] === 10) { // '\n'
- if (this._sawFirstCr) {
- split = i;
- break;
- } else {
- this._sawFirstCr = true;
- }
- } else {
- this._sawFirstCr = false;
- }
- }
-
- if (split === -1) {
- // still waiting for the \n\n
- // stash the chunk, and try again.
- this._rawHeader.push(chunk);
- this.push('');
- } else {
- this._inBody = true;
- var h = chunk.slice(0, split);
- this._rawHeader.push(h);
- var header = Buffer.concat(this._rawHeader).toString();
- try {
- this.header = JSON.parse(header);
- } catch (er) {
- this.emit('error', new Error('invalid simple protocol data'));
- return;
- }
- // now, because we got some extra data, unshift the rest
- // back into the read queue so that our consumer will see it.
- var b = chunk.slice(split);
- this.unshift(b);
-
- // and let them know that we are done parsing the header.
- this.emit('header', this.header);
- }
- } else {
- // from there on, just provide the data to our consumer.
- // careful not to push(null), since that would indicate EOF.
- var chunk = this._source.read();
- if (chunk) this.push(chunk);
- }
-};
-
-// Usage:
-var parser = new SimpleProtocol(source);
-// Now parser is a readable stream that will emit 'header'
-// with the parsed header data.
-```
-
-### readable.wrap(stream)
-
-* `stream` {Stream} An "old style" readable stream
-
-If you are using an older Node library that emits `'data'` events and
-has a `pause()` method that is advisory only, then you can use the
-`wrap()` method to create a Readable stream that uses the old stream
-as its data source.
-
-For example:
-
-```javascript
-var OldReader = require('./old-api-module.js').OldReader;
-var oreader = new OldReader;
-var Readable = require('stream').Readable;
-var myReader = new Readable().wrap(oreader);
-
-myReader.on('readable', function() {
- myReader.read(); // etc.
-});
-```
-
-### Event: 'readable'
-
-When there is data ready to be consumed, this event will fire.
-
-When this event emits, call the `read()` method to consume the data.
-
-### Event: 'end'
-
-Emitted when the stream has received an EOF (FIN in TCP terminology).
-Indicates that no more `'data'` events will happen. If the stream is
-also writable, it may be possible to continue writing.
-
-### Event: 'data'
-
-The `'data'` event emits either a `Buffer` (by default) or a string if
-`setEncoding()` was used.
-
-Note that adding a `'data'` event listener will switch the Readable
-stream into "old mode", where data is emitted as soon as it is
-available, rather than waiting for you to call `read()` to consume it.
-
-### Event: 'error'
-
-Emitted if there was an error receiving data.
-
-### Event: 'close'
-
-Emitted when the underlying resource (for example, the backing file
-descriptor) has been closed. Not all streams will emit this.
-
-### readable.setEncoding(encoding)
-
-Makes the `'data'` event emit a string instead of a `Buffer`. `encoding`
-can be `'utf8'`, `'utf16le'` (`'ucs2'`), `'ascii'`, or `'hex'`.
-
-The encoding can also be set by specifying an `encoding` field to the
-constructor.
-
-### readable.read([size])
-
-* `size` {Number | null} Optional number of bytes to read.
-* Return: {Buffer | String | null}
-
-Note: **This function SHOULD be called by Readable stream users.**
-
-Call this method to consume data once the `'readable'` event is
-emitted.
-
-The `size` argument will set a minimum number of bytes that you are
-interested in. If not set, then the entire content of the internal
-buffer is returned.
-
-If there is no data to consume, or if there are fewer bytes in the
-internal buffer than the `size` argument, then `null` is returned, and
-a future `'readable'` event will be emitted when more is available.
-
-Calling `stream.read(0)` will always return `null`, and will trigger a
-refresh of the internal buffer, but otherwise be a no-op.
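-
-As a sketch of typical usage (the file name is a placeholder), a consumer drains the internal buffer on each `'readable'` event:
-
-```javascript
-var fs = require('fs');
-var rs = fs.createReadStream('some-file.txt');
-
-rs.on('readable', function() {
-  // read() returns null once the internal buffer is empty;
-  // another 'readable' event fires when more data arrives.
-  var chunk;
-  while (null !== (chunk = rs.read())) {
-    console.log('got %d bytes', chunk.length);
-  }
-});
-
-rs.on('end', function() {
-  console.log('done');
-});
-```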
-
-### readable.pipe(destination, [options])
-
-* `destination` {Writable Stream}
-* `options` {Object} Optional
- * `end` {Boolean} Default=true
-
-Connects this readable stream to `destination` WriteStream. Incoming
-data on this stream gets written to `destination`. Properly manages
-back-pressure so that a slow destination will not be overwhelmed by a
-fast readable stream.
-
-This function returns the `destination` stream.
-
-For example, emulating the Unix `cat` command:
-
- process.stdin.pipe(process.stdout);
-
-By default `end()` is called on the destination when the source stream
-emits `end`, so that `destination` is no longer writable. Pass `{ end:
-false }` as `options` to keep the destination stream open.
-
-This keeps `writer` open so that "Goodbye" can be written at the
-end.
-
- reader.pipe(writer, { end: false });
- reader.on("end", function() {
- writer.end("Goodbye\n");
- });
-
-Note that `process.stderr` and `process.stdout` are never closed until
-the process exits, regardless of the specified options.
-
-### readable.unpipe([destination])
-
-* `destination` {Writable Stream} Optional
-
-Undo a previously established `pipe()`. If no destination is
-provided, then all previously established pipes are removed.
-
-### readable.pause()
-
-Switches the readable stream into "old mode", where data is emitted
-using a `'data'` event rather than being buffered for consumption via
-the `read()` method.
-
-Ceases the flow of data. No `'data'` events are emitted while the
-stream is in a paused state.
-
-### readable.resume()
-
-Switches the readable stream into "old mode", where data is emitted
-using a `'data'` event rather than being buffered for consumption via
-the `read()` method.
-
-Resumes the incoming `'data'` events after a `pause()`.
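-
-For example, an old-mode consumer might throttle itself like this (a sketch; the file name and the one-second delay are arbitrary):
-
-```javascript
-var fs = require('fs');
-var rs = fs.createReadStream('some-file.txt');
-
-rs.on('data', function(chunk) {
-  console.log('got %d bytes', chunk.length);
-  // stop the flow while we do some slow work, then start it again
-  rs.pause();
-  setTimeout(function() {
-    rs.resume();
-  }, 1000);
-});
-```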
-
-
-## Class: stream.Writable
-
-<!--type=class-->
-
-A `Writable` Stream has the following methods, members, and events.
-
-Note that `stream.Writable` is an abstract class designed to be
-extended with an underlying implementation of the
-`_write(chunk, encoding, cb)` method. (See below.)
-
-### new stream.Writable([options])
-
-* `options` {Object}
- * `highWaterMark` {Number} Buffer level when `write()` starts
- returning false. Default=16kb
- * `decodeStrings` {Boolean} Whether or not to decode strings into
- Buffers before passing them to `_write()`. Default=true
-
-In classes that extend the Writable class, make sure to call the
-constructor so that the buffering settings can be properly
-initialized.
-
-### writable.\_write(chunk, encoding, callback)
-
-* `chunk` {Buffer | String} The chunk to be written. Will always
- be a buffer unless the `decodeStrings` option was set to `false`.
-* `encoding` {String} If the chunk is a string, then this is the
-  encoding type. Ignored if chunk is a buffer. Note that chunk will
- **always** be a buffer unless the `decodeStrings` option is
- explicitly set to `false`.
-* `callback` {Function} Call this function (optionally with an error
- argument) when you are done processing the supplied chunk.
-
-All Writable stream implementations must provide a `_write` method to
-send data to the underlying resource.
-
-Note: **This function MUST NOT be called directly.** It should be
-implemented by child classes, and called by the internal Writable
-class methods only.
-
-Call the callback using the standard `callback(error)` pattern to
-signal that the write completed successfully or with an error.
-
-If the `decodeStrings` flag is set in the constructor options, then
-`chunk` may be a string rather than a Buffer, and `encoding` will
-indicate the sort of string that it is. This is to support
-implementations that have an optimized handling for certain string
-data encodings. If you do not explicitly set the `decodeStrings`
-option to `false`, then you can safely ignore the `encoding` argument,
-and assume that `chunk` will always be a Buffer.
-
-This method is prefixed with an underscore because it is internal to
-the class that defines it, and should not be called directly by user
-programs. However, you **are** expected to override this method in
-your own extension classes.
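-
-As a minimal illustration (not from the original docs), a Writable subclass that just logs what it receives might look like this:
-
-```javascript
-var Writable = require('stream').Writable;
-var util = require('util');
-
-function LogWriter(options) {
-  Writable.call(this, options);
-}
-util.inherits(LogWriter, Writable);
-
-LogWriter.prototype._write = function(chunk, encoding, callback) {
-  // chunk is a Buffer unless decodeStrings: false was passed
-  console.log('writing %d bytes', chunk.length);
-  callback();
-};
-
-var w = new LogWriter();
-w.write('hello ');
-w.end('world\n');
-```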
-
-
-### writable.write(chunk, [encoding], [callback])
-
-* `chunk` {Buffer | String} Data to be written
-* `encoding` {String} Optional. If `chunk` is a string, then encoding
- defaults to `'utf8'`
-* `callback` {Function} Optional. Called when this chunk is
- successfully written.
-* Returns {Boolean}
-
-Writes `chunk` to the stream. Returns `true` if the data has been
-flushed to the underlying resource. Returns `false` to indicate that
-the buffer is full, and the data will be sent out in the future. The
-`'drain'` event will indicate when the buffer is empty again.
-
-The specifics of when `write()` will return false are determined by
-the `highWaterMark` option provided to the constructor.
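-
-A writer that respects back-pressure can be structured like this (an illustrative sketch; `writeAll` is a hypothetical helper, not part of the API):
-
-```javascript
-function writeAll(writable, chunks, done) {
-  function write() {
-    var ok = true;
-    // keep writing until the buffer is full or we run out of chunks
-    while (chunks.length && ok) {
-      ok = writable.write(chunks.shift());
-    }
-    if (chunks.length) {
-      // buffer is full; wait for it to drain before writing more
-      writable.once('drain', write);
-    } else {
-      done();
-    }
-  }
-  write();
-}
-```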
-
-### writable.end([chunk], [encoding], [callback])
-
-* `chunk` {Buffer | String} Optional final data to be written
-* `encoding` {String} Optional. If `chunk` is a string, then encoding
- defaults to `'utf8'`
-* `callback` {Function} Optional. Called when the final chunk is
- successfully written.
-
-Call this method to signal the end of the data being written to the
-stream.
-
-### Event: 'drain'
-
-Emitted when the stream's write queue empties and it's safe to write
-without buffering again. Listen for it when `stream.write()` returns
-`false`.
-
-### Event: 'close'
-
-Emitted when the underlying resource (for example, the backing file
-descriptor) has been closed. Not all streams will emit this.
-
-### Event: 'finish'
-
-When `end()` is called and there are no more chunks to write, this
-event is emitted.
-
-### Event: 'pipe'
-
-* `source` {Readable Stream}
-
-Emitted when the stream is passed to a readable stream's pipe method.
-
-### Event 'unpipe'
-
-* `source` {Readable Stream}
-
-Emitted when a previously established `pipe()` is removed using the
-source Readable stream's `unpipe()` method.
-
-## Class: stream.Duplex
-
-<!--type=class-->
-
-A "duplex" stream is one that is both Readable and Writable, such as a
-TCP socket connection.
-
-Note that `stream.Duplex` is an abstract class designed to be
-extended with an underlying implementation of the `_read(size)`
-and `_write(chunk, encoding, callback)` methods as you would with a Readable or
-Writable stream class.
-
-Since JavaScript doesn't have multiple prototypal inheritance, this
-class prototypally inherits from Readable, and then parasitically from
-Writable. It is thus up to the user to implement both the lowlevel
-`_read(n)` method and the lowlevel `_write(chunk, encoding, cb)` method
-on extension duplex classes.
-
-### new stream.Duplex(options)
-
-* `options` {Object} Passed to both Writable and Readable
- constructors. Also has the following fields:
- * `allowHalfOpen` {Boolean} Default=true. If set to `false`, then
- the stream will automatically end the readable side when the
- writable side ends and vice versa.
-
-In classes that extend the Duplex class, make sure to call the
-constructor so that the buffering settings can be properly
-initialized.
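-
-As a tiny illustration (not from the original docs), a loopback Duplex that makes whatever is written to it readable again could be sketched as:
-
-```javascript
-var Duplex = require('stream').Duplex;
-var util = require('util');
-
-function Loopback(options) {
-  Duplex.call(this, options);
-}
-util.inherits(Loopback, Duplex);
-
-Loopback.prototype._write = function(chunk, encoding, callback) {
-  // everything written becomes readable on the other side
-  this.push(chunk);
-  callback();
-};
-
-Loopback.prototype._read = function(n) {
-  // data is pushed from _write(); nothing to pull here
-};
-```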
-
-## Class: stream.Transform
-
-A "transform" stream is a duplex stream where the output is causally
-connected in some way to the input, such as a zlib stream or a crypto
-stream.
-
-There is no requirement that the output be the same size as the input,
-the same number of chunks, or arrive at the same time. For example, a
-Hash stream will only ever have a single chunk of output which is
-provided when the input is ended. A zlib stream will produce output
-that is either much smaller or much larger than its input.
-
-Rather than implement the `_read()` and `_write()` methods, Transform
-classes must implement the `_transform()` method, and may optionally
-also implement the `_flush()` method. (See below.)
-
-### new stream.Transform([options])
-
-* `options` {Object} Passed to both Writable and Readable
- constructors.
-
-In classes that extend the Transform class, make sure to call the
-constructor so that the buffering settings can be properly
-initialized.
-
-### transform.\_transform(chunk, encoding, callback)
-
-* `chunk` {Buffer | String} The chunk to be transformed. Will always
- be a buffer unless the `decodeStrings` option was set to `false`.
-* `encoding` {String} If the chunk is a string, then this is the
-  encoding type. (Ignored if the chunk is a buffer.)
-* `callback` {Function} Call this function (optionally with an error
- argument) when you are done processing the supplied chunk.
-
-Note: **This function MUST NOT be called directly.** It should be
-implemented by child classes, and called by the internal Transform
-class methods only.
-
-All Transform stream implementations must provide a `_transform`
-method to accept input and produce output.
-
-`_transform` should do whatever has to be done in this specific
-Transform class, to handle the bytes being written, and pass them off
-to the readable portion of the interface. Do asynchronous I/O,
-process things, and so on.
-
-Call `transform.push(outputChunk)` 0 or more times to generate output
-from this input chunk, depending on how much data you want to output
-as a result of this chunk.
-
-Call the callback function only when the current chunk is completely
-consumed. Note that there may or may not be output as a result of any
-particular input chunk.
-
-This method is prefixed with an underscore because it is internal to
-the class that defines it, and should not be called directly by user
-programs. However, you **are** expected to override this method in
-your own extension classes.
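-
-A minimal Transform that upper-cases its input (an illustrative sketch, not from the original docs) shows the `_transform`/`push`/`callback` pattern:
-
-```javascript
-var Transform = require('stream').Transform;
-var util = require('util');
-
-function Upcase(options) {
-  Transform.call(this, options);
-}
-util.inherits(Upcase, Transform);
-
-Upcase.prototype._transform = function(chunk, encoding, callback) {
-  // push the transformed data to the readable side, then signal completion
-  this.push(chunk.toString().toUpperCase());
-  callback();
-};
-
-process.stdin.pipe(new Upcase()).pipe(process.stdout);
-```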
-
-### transform.\_flush(callback)
-
-* `callback` {Function} Call this function (optionally with an error
- argument) when you are done flushing any remaining data.
-
-Note: **This function MUST NOT be called directly.** It MAY be implemented
-by child classes, and if so, will be called by the internal Transform
-class methods only.
-
-In some cases, your transform operation may need to emit a bit more
-data at the end of the stream. For example, a `Zlib` compression
-stream will store up some internal state so that it can optimally
-compress the output. At the end, however, it needs to do the best it
-can with what is left, so that the data will be complete.
-
-In those cases, you can implement a `_flush` method, which will be
-called at the very end, after all the written data is consumed, but
-before emitting `end` to signal the end of the readable side. Just
-like with `_transform`, call `transform.push(chunk)` zero or more
-times, as appropriate, and call `callback` when the flush operation is
-complete.
-
-This method is prefixed with an underscore because it is internal to
-the class that defines it, and should not be called directly by user
-programs. However, you **are** expected to override this method in
-your own extension classes.
-
-### Example: `SimpleProtocol` parser
-