
DocPad Database Architecture Vision #705

Closed
balupton opened this issue Nov 12, 2013 · 36 comments

Comments

@balupton (Member)

In relation to #445 (abstract all the things) and my comment here about a possible abstraction architecture #702 (comment)

I'd like to use this issue as a way to implement the goal of abstracting out the DocPad architecture, while at the same time addressing our performance and memory issues and allowing us to evolve to meet the demands of the future.

I've already outlined the following architecture:

  • Core Experience
    • DocPad CLI Module
    • DocPad Core Module
      • DocPad Generate Module
      • DocPad Plugin Module
      • DocPad File Model & Collection Module
      • DocPad Block Model & Collection Module
    • DocPad Watching Plugin
    • DocPad Growl Plugin
  • Extra Experience
    • Renderer Plugins
    • Helper Plugins
    • Source Plugins (formerly importers)
    • Interface Plugins

Tying this together, it would look something like this:

[image: Sketch of DocPad Architecture Vision]

The parts on the right are source plugins, which pull in different sources of content (formerly called importers) and keep them up to date. The reason for the rename is that the name "importers" reflects only one-way data transfer, whereas with this new architecture the source plugins would provide bidirectional data transfer: import into DocPad, and export back into the original source.

The parts on the left are renderer plugins, which modify content during the generation phase.

I feel we can have a very basic file model whose event system serves as a mediator; DocPad then listens to those events to perform generations and saves.

We should be able to abstract this in such a way that other static site generators, content management systems, and generic Node modules can use it. Plugins would then be written against the content database rather than against DocPad, with DocPad providing extra features and conventions to make things easier.

@0xgeert commented Nov 12, 2013

How about an API endpoint as part of the Core Experience to push CUD (create/update/delete) documents to DocPad? It could sit nicely beside the Watch Plugin and share the same processing flow. The advantage would be the option to connect arbitrary external systems as producers.
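A rough sketch of what the core of such a push endpoint could look like (all names here are made up for illustration; this is not an existing DocPad API). The transport layer is deliberately left out so the handler could share the same processing flow as the watcher:

```javascript
// Hypothetical in-memory document store shared with the rest of the pipeline.
const store = new Map();

// Core CUD handler, transport-agnostic: an HTTP route or a file watcher
// could both feed changes through this same function.
function applyChange(operation, doc) {
  switch (operation) {
    case 'create':
    case 'update':
      store.set(doc.id, { ...doc });
      return { ok: true, regenerate: [doc.id] };
    case 'delete':
      store.delete(doc.id);
      return { ok: true, regenerate: [doc.id] };
    default:
      return { ok: false, error: `unknown operation: ${operation}` };
  }
}

// An external producer (editor, CMS, script) pushing a document:
const result = applyChange('create', { id: 'posts/hello', content: '# Hi' });
```

An HTTP server would simply map request methods onto `applyChange` and trigger a regeneration with the returned document ids.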


@balupton (Member, Author)

@gebrits good idea, can you provide more of an example?

Would this be like the restapi plugin, or like #543?

@0xgeert commented Nov 13, 2013

@balupton: the restapi plugin and #543 (the stuff at the top in moss green) seem to overlap at least the push part of it (i.e. from Express, Geddy, etc. TO DocPad, and not the other way around), correct? But indeed, that's essentially what I'm thinking about. It would allow all sorts of editor/IDE frontends to push documents to DocPad and have DocPad produce the static HTML from it.

Probably superfluous, but: to do this efficiently, I guess there needs to be some notion of a dependency graph between documents (and other assets), i.e. which document changes may influence changes in other docs. This could be used to calculate the smallest subset of documents that would have to be reprocessed when a create/update/delete comes in. I can elaborate if I'm not making sense.
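The smallest-subset idea could be sketched like this (a toy illustration, not DocPad code): given a map of which documents reference which, compute the transitive set of dependents that must be reprocessed when one document changes.

```javascript
// deps maps a document to the documents it references.
// To find what must be reprocessed on change, walk the reverse edges:
// everything that (transitively) references the changed document.
function affectedBy(changed, deps) {
  // Build reverse adjacency: target -> documents referencing it.
  const reverse = new Map();
  for (const [doc, refs] of Object.entries(deps)) {
    for (const ref of refs) {
      if (!reverse.has(ref)) reverse.set(ref, []);
      reverse.get(ref).push(doc);
    }
  }
  // Breadth-first walk from the changed document.
  const affected = new Set([changed]);
  const queue = [changed];
  while (queue.length) {
    const current = queue.shift();
    for (const dependent of reverse.get(current) || []) {
      if (!affected.has(dependent)) {
        affected.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return affected;
}

const deps = {
  'index.html': ['posts/a.html', 'posts/b.html'],
  'posts/a.html': ['snippets/bio.html'],
  'posts/b.html': [],
};
// Changing the bio affects posts/a.html, which in turn affects index.html,
// while posts/b.html can be skipped entirely.
const result = affectedBy('snippets/bio.html', deps);
```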


@pflannery (Contributor)

Continuing from #709

What are your thoughts on applying an interception technique to the data model?

@pflannery (Contributor)

Thought I would add a quick representation of how interception could be used. This isn't anything real, just a pretend object-graph example:

DocumentModel

    # Optional caching layer on a per-model basis
    CachedModel (optional caching interceptor)

        # Optional logging layer
        ModelLogging (optional logging interceptor)

            # Service interceptor layer
            FileModel (uses the CRUD interface)

            GitModel (uses the CRUD interface)

            TumblrModel (uses the CRUD interface)

            StockMarketTickerModel (uses the CRUD interface)

            StorageModel

                # Storage services
                LocalDBModel (uses the CRUD interface)

                MirrorApiModel (uses the CRUD interface)

    # well, you get the picture

A typical CRUD interface:
    create
    read
    update
    delete

The graph above would be quite powerful, allowing data to be dynamic, stored locally, stored externally, mirrored, read in from one source and then written to another, etc.

Another benefit is that this kind of pattern would help V8 generate optimised code.
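A tiny sketch of the interceptor idea (hypothetical names; not a proposal for DocPad's actual API): each layer implements the same CRUD interface and wraps the next one down, so caching or logging can be slotted in without the caller knowing.

```javascript
// Innermost layer: a fake service model that actually owns the data.
class MemoryModel {
  constructor() {
    this.data = new Map();
    this.reads = 0; // instrumentation, just for this demo
  }
  create(id, value) { this.data.set(id, value); }
  read(id) { this.reads += 1; return this.data.get(id); }
  update(id, value) { this.data.set(id, value); }
  delete(id) { this.data.delete(id); }
}

// Caching interceptor: same CRUD interface, wraps any inner model.
class CachedModel {
  constructor(inner) {
    this.inner = inner;
    this.cache = new Map();
  }
  create(id, value) { this.cache.delete(id); this.inner.create(id, value); }
  read(id) {
    if (!this.cache.has(id)) this.cache.set(id, this.inner.read(id));
    return this.cache.get(id);
  }
  update(id, value) { this.cache.delete(id); this.inner.update(id, value); }
  delete(id) { this.cache.delete(id); this.inner.delete(id); }
}

const backing = new MemoryModel();
const model = new CachedModel(backing);
model.create('doc1', 'hello');
model.read('doc1');
model.read('doc1'); // second read served from cache; backing.reads stays at 1
```

A logging interceptor would wrap in exactly the same way, which is what makes the layers freely composable.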

@balupton (Member, Author)

@gebrits

probably superfluous but: to do this efficiently I guess there needs to be
some notion of a dependency graph between documents (and other assets) .
I.e: which document changes (may) influence changes in other docs. This
could be used to calculate the smallest subset of documents that would have
to be reprocessed when a create/update/delete comes in. I could elaborate
if I'm not making sense.

Is this what you mean? #336

@0xgeert commented Nov 19, 2013

@balupton: yes, indeed: having a map of doc --refs--> [doc] instead of the current referenceOthers: true|false.

Once that's in place, an intelligent (and optimal) algorithm for #359 'trivially' becomes a topological sort: http://en.wikipedia.org/wiki/Topological_sorting
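Once the doc → [doc] reference map exists, a safe regeneration order is indeed a topological sort. A sketch using Kahn's algorithm (illustrative only):

```javascript
// deps maps each document to the documents it depends on.
// Returns an order in which each document comes after its dependencies.
function topologicalOrder(deps) {
  const indegree = new Map();
  const dependents = new Map(); // dependency -> docs that depend on it
  for (const [doc, refs] of Object.entries(deps)) {
    if (!indegree.has(doc)) indegree.set(doc, 0);
    for (const ref of refs) {
      if (!indegree.has(ref)) indegree.set(ref, 0);
      indegree.set(doc, indegree.get(doc) + 1);
      if (!dependents.has(ref)) dependents.set(ref, []);
      dependents.get(ref).push(doc);
    }
  }
  // Kahn's algorithm: repeatedly emit documents with no unmet dependencies.
  const queue = [...indegree.keys()].filter((d) => indegree.get(d) === 0);
  const order = [];
  while (queue.length) {
    const doc = queue.shift();
    order.push(doc);
    for (const dep of dependents.get(doc) || []) {
      indegree.set(dep, indegree.get(dep) - 1);
      if (indegree.get(dep) === 0) queue.push(dep);
    }
  }
  if (order.length !== indegree.size) throw new Error('cycle detected');
  return order;
}

// snippets/bio.html renders first, then posts/a.html, then index.html.
const order = topologicalOrder({
  'index.html': ['posts/a.html'],
  'posts/a.html': ['snippets/bio.html'],
});
```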


@balupton (Member, Author)

For anyone keen, let's try for a Google Hangout on Air tomorrow. Sometime from 11am to 5pm Berlin Time works for me. Let me know what time in there works for each of you, and we'll lock in the most common time by 10am.

Windows of availability:

@greduan (Contributor) commented Nov 19, 2013

Tomorrow anything after 7:00 and before 16:00 is good for me. Mexico City time.

In Berlin Time it would be 14:00 to 23:00.

According to http://www.timebie.com/timezone/berlinmexicocity.php

@0xgeert commented Nov 19, 2013

14:00 - 17:00 Berlin Time would work for me.


@greduan (Contributor) commented Nov 19, 2013

@balupton Add one more hour of availability. I am now available till 17:00 which is 24:00 in Berlin time.

@balupton (Member, Author)

Cool. Let's do 3pm/15:00 then :)

@balupton (Member, Author)

Agenda for the meeting.

The importing process is slow; we need to fix this.

This is the process:

  1. DocPad emits populateCollections
    1. Tumblr plugin requests latest data from Tumblr API (with caching)
    2. Tumblr plugin gets results back
    3. Tumblr plugin injects results into DocPad models
    4. Tumblr plugin fires event completion callback
  2. DocPad initial generate starts
    1. DocPad parses+contextualizes tumblr models
    2. DocPad renders tumblr documents
    3. DocPad writes tumblr documents

The slowness comes from every step besides 2.3. However, the majority of the slowness comes from 1.3 through 2.3, as 1.1 and 1.2 are actually really fast due to the caching of data.

We can speed up time to generation by {having data pulled in in the background}:

  • A. Calling step 1.4 right away, and having steps 1.1-1.3 occur in the background, calling a generate action once completed
    • This needs to be turned off when doing one-off compiles, such as docpad generate (as it would cause one-off compiles to have incomplete data), and should only be enabled when doing docpad run

We can speed up injection time by {caching tumblr documents as physical files}:

  • B. Having the tumblr plugin write the documents to the source directory by using the writeSource: once header.
    • This suggestion requires A to be implemented, otherwise it will be just as slow.
    • This suggestion requires that the tumblr plugin updates the existing models, rather than adding duplicates.

We can speed up rendering time by {going directly to out files at the start}:

  • C. Turning off DocPad's HTTP request delay when we are doing the initial generation to have the ability to serve the out directory via the static middleware
    • This needs to be opt-in, as it can cause havoc on dynamic websites

We can speed up the entire process by {caching the entire database}:

  • D. Injecting a cache step between 2.2 and 2.3 that caches the rendered results to an external database, and loads that database on initial load.
    • This means that rather than doing initial generations each time, we do one initial generation, cache the result, then load up the cache each time.
    • This would have significant performance improvements for all DocPad sites.
    • A question here, is whether or not this caching should be in the core, or whether it should be a plugin.

We can speed up generating and rendering by only rendering documents as we need them:

  • E. When doing a long-running DocPad instance, documents are only rendered when they are requested.
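Option A above could be sketched roughly like this (hypothetical callback names; only the populateCollections event is real). The completion callback fires immediately so generation is not blocked, the slow source pull happens in the background, and a regenerate is triggered once the data lands:

```javascript
// fetchData: slow source pull (e.g. the Tumblr API), callback style.
// complete: step 1.4, the event completion callback DocPad is waiting on.
// onPopulated: called with the fetched models once the pull finishes,
//              e.g. to inject them and trigger another generate action.
function populateInBackground(fetchData, complete, onPopulated) {
  // Fire step 1.4 right away so the initial generation can start.
  // (As noted above, this must be disabled for one-off `docpad generate`
  // runs, since those would otherwise compile with incomplete data.)
  complete();
  // Steps 1.1-1.3 then run in the background.
  fetchData((models) => {
    onPopulated(models);
  });
}

// Demo with a synchronous fake fetch so the ordering is visible:
const events = [];
populateInBackground(
  (cb) => { events.push('fetched'); cb(['post1', 'post2']); },
  () => events.push('generation-started'),
  (models) => events.push(`regenerate:${models.length}`)
);
```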

@0xgeert commented Nov 20, 2013

Sorry about this, but something came up so I can't make it.
Some quick thoughts:

  • D seems like low-hanging fruit. All for it. (Although I'm really not up to par with you guys on the internals of DocPad, obviously.)
  • In between 2.1 and 2.2 there's probably the stuff I was talking about: keeping a cache of docs referencing each other. Doing this together with D would enable doing only 2.2 and 2.3 on the needed subset, i.e. the (transitively) changed docs.


@pflannery (Contributor)

Wow, is Google Hangouts that bad, or was the "join hangout" button disabled? I sat here for over 30 minutes trying to get involved, with no luck.

@greduan (Contributor) commented Nov 20, 2013

@pflannery I think you needed to be invited to join the call. Ben didn't invite you because you didn't confirm your attendance I believe. Sorry! :(

@balupton (Member, Author)

It's up here: http://www.youtube.com/watch?v=560IGREBD-w

@pflannery Yeah, I have it set so that only those in my "DocPad" circle can join, as otherwise we could have complete randoms joining. Unfortunately, I didn't know you were attending until after the meetup, as I closed down Twitter right after my messages with Eduan to make sure the call speed was fast.

Really sorry about that mixup. Wish you had been there. You are now in the circle, so you will receive an invite next time. I'll also make sure to keep Twitter open.

@pflannery (Contributor)

@balupton ah I see. thanks

@pflannery (Contributor)

Just wanted to answer some of the questions you asked about ECS.

ECS is a runtime-polymorphic pattern.

To translate ECS to the familiar land of OO:

Entities are your objects/class instances,
Components are the properties of an entity instance, and
Systems are the behaviour/methods for the entities and/or components.

Instead of storing component (property) data on the entity (class) directly,
components are stored in their own lists.
Under the hood, entities are merely id/key lookups into the component lists.

Systems don't listen for events; instead
they generally poll the component list(s) and,
if the component(s) exist, perform some behaviour.
Async systems are possible.

ECS isn't anti-OO; in fact it embraces OO.
You can still have/create builders, factories, adapters, etc. (all very useful assistants).

The amazing thing about the ECS pattern is that an entity can inherit any system behaviour at runtime,
i.e. a TreeEntity could inherit the BirdBehaviour, or the BirdEntity could inherit the TreeBehaviour, and so on.

On the note of using it for applications other than games:

I've had plans to make a data service using ECS but haven't got around to it.
The challenge is that games have a continuous loop always executing each system, which in turn allows state-change detection per iteration (forever refreshing and updating state).

So the questions are:
- should an app/server be running a continuous loop?
- or maybe it would work on a request basis (turn by turn) and only iterate until there are no more changes?

Well anyway, I could rattle on for hours. I've got ideas for making this happen, but for me it's currently all theory, and it's fascinating, as ECS is awesome.
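For concreteness, a toy ECS sketch along those lines (purely illustrative, not a proposal for DocPad's actual design): entities are just ids, components live in their own lists, and a system polls components rather than listening for events.

```javascript
// Entities are just ids; components live in per-type maps keyed by entity id.
const components = {
  content: new Map(),  // entityId -> raw content
  rendered: new Map(), // entityId -> rendered output
};
let nextId = 0;
function createEntity() { return nextId++; }

// A system polls the component lists and acts when components exist.
// Here: any entity with content but no rendered output gets "rendered"
// (toUpperCase stands in for a real render step).
function renderSystem() {
  for (const [id, content] of components.content) {
    if (!components.rendered.has(id)) {
      components.rendered.set(id, content.toUpperCase());
    }
  }
}

const page = createEntity();
components.content.set(page, 'hello world');
renderSystem(); // one "tick" of the loop

// Behaviour is attached at runtime simply by adding a component:
const post = createEntity();
components.content.set(post, 'another doc');
renderSystem(); // the next tick picks up the new entity
```

The turn-by-turn question above corresponds to calling `renderSystem` per request until no component changes remain, instead of in a continuous loop.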

@greduan (Contributor) commented Nov 20, 2013

Thanks for clarifying ECS for Ben @pflannery. :)
It is much harder to explain it when speaking, trust me. lol

I find it very easy to visualize in my head how the whole DocPad architecture would work, but putting it on paper or writing it down is a little more complicated.

I'll share some kind of idea of how things would work with an ECS architecture later this week or next week, I gotta figure it out exactly and write it down first.

@pflannery (Contributor)

@greduan haha, fair enough... I know what you mean. At first I was going to give examples that fitted DocPad, then realised it would take too much time just to post a small comment on GitHub.

Looking forward to seeing your ideas!

@balupton (Member, Author)

I forgot another option for speeding up importing:

We can speed up generating and rendering by only rendering documents as we need them:

  • E. When doing a long-running DocPad instance, documents are only rendered when they are requested.

@greduan (Contributor) commented Nov 22, 2013

That's a viable option. It would mean the user would have to wait a little more but it works.

@pflannery (Contributor)

Yeah, cool. I would like the option of on-demand rendering; then on top of this we could have:

  • a lifetime value to keep rendered contents cached
  • when the lifetime value expires, the document reverts back to an on-demand state
  • a lifetime value of 0 would keep the document constantly on-demand
  • when the session lifetime expires, it would reset all the lifetimes of the entire on-demand cache
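That lifetime scheme could be sketched like this (a hypothetical design, with an injectable clock so the behaviour stays deterministic): rendered output is cached for `lifetime` milliseconds, and a lifetime of 0 keeps the document constantly on-demand.

```javascript
// On-demand render cache with a per-cache lifetime (in ms).
// `now` is injectable so the expiry behaviour is testable.
function createRenderCache(render, lifetime, now = Date.now) {
  const cache = new Map(); // id -> { value, expires }
  return function get(id) {
    const entry = cache.get(id);
    // lifetime 0 means never cached: always render on demand.
    if (lifetime > 0 && entry && entry.expires > now()) {
      return entry.value;
    }
    const value = render(id); // expired (or never cached): render on demand
    cache.set(id, { value, expires: now() + lifetime });
    return value;
  };
}

// Demo with a fake clock and a counting renderer:
let time = 0;
let renders = 0;
const get = createRenderCache(
  (id) => { renders += 1; return `<html>${id}</html>`; },
  1000,
  () => time
);
get('about'); // first request renders
get('about'); // within the lifetime: served from cache
time = 2000;  // lifetime expired
get('about'); // reverts to on-demand and renders again
```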

@balupton (Member, Author)

I've got a lot of the groundwork done for this on the dev-speed branch, and am now attempting to get the database to cache to a file. Ran into a problem, so will continue work on this tomorrow.

@pflannery (Contributor)

cool.

wow where did ".docpad.db" come from?

@balupton (Member, Author)

wow where did ".docpad.db" come from?

Awesome isn't it? :D It's the new database cache file discussed in option D.


I've now pushed the changes up to the dev-speed branch, and would love for people to try it out before I publish the version tomorrow, in case it breaks anything.

Changelog: https://github.com/bevry/docpad/blob/dev-speed/History.md#readme

To give it a go:

npm install -g coffee-script
cd ~
git clone https://github.com/bevry/docpad.git
cd docpad
git checkout dev-speed
cake compile
npm link

Then inside one of your projects:

docpad update
docpad run --global

Notes:

  • Turn off all writeSource headers; you can now consider this option deprecated, as it will cause duplicated documents. Delete any old writeSource-cached documents in your src directory. If this is a major problem, let me know. The reasoning is that the database cache serves the same goal as writeSource did, while being way faster and less complex, so we'll be dropping support for the writeSource header if there are no critical objections.

Sweet, looking forward to the feedback. On our side, this is producing huge performance improvements with initial generations as well as subsequent generations. Though I'm keen to hear how it goes with more real world usage.

@Delapouite (Contributor)

I need to investigate why, but it didn't work in my case.

Here's the 54.10 output

┌─────────────────────┬────────────┬────────────┐
│ event               │ time in ms │ percentage │
├─────────────────────┼────────────┼────────────┤
│ serverAfter         │ 60         │ 0%         │
├─────────────────────┼────────────┼────────────┤
│ populateCollections │ 19884      │ 6%         │
├─────────────────────┼────────────┼────────────┤
│ parseAfter          │ 60126      │ 18%        │
├─────────────────────┼────────────┼────────────┤
│ contextualizeAfter  │ 17308      │ 5%         │
├─────────────────────┼────────────┼────────────┤
│ renderAfter         │ 197625     │ 60%        │
├─────────────────────┼────────────┼────────────┤
│ writeAfter          │ 17976      │ 5%         │
├─────────────────────┼────────────┼────────────┤
│ Total               │ 331118     │            │
└─────────────────────┴────────────┴────────────┘

info: Generated 2475/2475 files in 330.526 seconds

6.55 (dev-speed) 1st run

┌─────────────────────┬────────────┬────────────┐
│ event               │ time in ms │ percentage │
├─────────────────────┼────────────┼────────────┤
│ serverAfter         │ 41         │ 0%         │
├─────────────────────┼────────────┼────────────┤
│ populateCollections │ 22394      │ 6%         │
├─────────────────────┼────────────┼────────────┤
│ parseAfter          │ 59114      │ 17%        │
├─────────────────────┼────────────┼────────────┤
│ contextualizeAfter  │ 17886      │ 5%         │
├─────────────────────┼────────────┼────────────┤
│ renderAfter         │ 210634     │ 61%        │
├─────────────────────┼────────────┼────────────┤
│ writeAfter          │ 19513      │ 6%         │
├─────────────────────┼────────────┼────────────┤
│ Total               │ 347813     │            │
└─────────────────────┴────────────┴────────────┘

info: Generated 2475/2475 files in 350.087 seconds

2nd run, no file added or changed:

info: Generating...
info: Generated 0/2475 files in 32.894 seconds

3rd run, one file added:

info: Generating...
info: Generated 0/2475 files in 33.37 seconds

The problem is that the new file was not detected.
.docpad.db is created and is about 20 MB.

@balupton (Member, Author)

The problem is that the new file was not detected.

To be clear: is that the only problem?

If so, I know how to address it. I had turned off the directory scanning on the initial generation when the database cache exists, but that use case proves it is still necessary. I'll make an amendment and let you know once it's pushed up.

@Delapouite (Contributor)

Yes, for my first test that was the only issue. But since it was blocking, I stopped.

but that use case proves that it is still necessary

My use case? Well isn't that pretty common?
You generate a static website, turn off your computer, the next day add new documents and regenerate it again?

Your sentence confirms what I was afraid of: DocPad is not a static website generator anymore, but now tends to focus more and more on the server-delivery stuff, with a "generated when requested" paradigm.

Nginx does the job pretty well for delivering static pre-rendered assets; I regret that the server component of DocPad has become such a prominent part of it, version after version.

Don't get me wrong, it's not a rant. I still enjoy the project and deeply respect what has been done; I just feel that the recent evolution has forgotten the original goals.

@balupton (Member, Author)

My use case? Well isn't that pretty common?
You generate a static website, turn off your computer, the next day add new documents and regenerate it again?

This is true, and it is a common use case. Sorry that my words made it seem otherwise. My intention was not to comment on its commonness or uncommonness, but rather to note that it is a use case we should support, and to serve as a reminder for myself to add a note to the codebase when I make the amendment. Hope that clears it up.

Your sentence confirms what I was afraid of ; DocPad is not a static website generator anymore but now tends to focus more and more on the server delivery stuffs, with a "generated when requested" paradigm.

Hrmmm. Perhaps there has been some confusion about what the "generate when requested" paradigm is about. It certainly isn't a way to put static site generation on the back burner; rather, it is the only way we have come up with so far that solves the memory problem static site generators face. If you have 10GB of data over 10,000 files, static site generators can't deal with that, or if they can, they do so very slowly or crash as soon as they exceed the available memory of memory-limited server-host architectures (like Heroku).

Don't get me wrong, it's not a rant. I still enjoy the project and deeply respect what has been done, I just feel that recent evolution forgot the original goals.

Yeah, perhaps I haven't been that clear about the way we are tackling DocPad's growing pains in recent times, and why we need to. DocPad a year ago was a static site generator, and a good one at that; however, it was a platform only for developers, and only for small static sites. Those are imposed theoretical limitations, rather than practical technical ones. We can solve the speed problems with the solutions identified here, and we can solve the market problem by exposing dynamic abilities that enable third-party creation experiences. These are scaling issues in a way, or rather growing pains, of a project whose market share is naturally growing and expanding to new and sustainable levels.

The DocPad ecosystem is greater than ever, the community highly engaged, and the demands even stronger than before. It may seem like a tough balance between staying true to developers and expanding the market reach; however, I feel very strongly that this balance issue is only a perceived one. DocPad has always been a tool for developer experience, and through this natural evolution it will stay true to that while opening more doors. We started off with a single door leading to an amazing developer-experience room, and now we're expanding to build the other rooms surrounding it. It's exciting, but that original door to the developer-experience room is still there, becoming more mature as the days pass, despite what is happening in the other rooms.

Let's continue this in another issue if need be :)

@balupton (Member, Author)

DocPad v6.55.0 now out :) So glad this is done.

The only gotcha is that files removed while DocPad is not running will not get picked up; this requires a docpad clean. Will fix this up in a later release.

More info here: https://github.com/bevry/docpad/blob/master/History.md#readme

@mikeumus (Member)

Pardon, I didn't read this completely as I go through cleaning up issues, but:

Is this still relevant to our refactoring efforts, or is this something that should end up in a @loomio discussion?
Basically, can I clean this up? @balupton

@balupton (Member, Author)

This is best served by a vision document.

@almereyda
Regarding generated-when-requested (like Harp): maybe DocPad can even be added to https://unhosted.org/tools/ and http://nobackend.org/solutions.html, if only because it can also act as a static generator that makes no assumptions.

But as stated in the closely related issue docpad-archive/meta-gui#20, this vision document sounds appealing to me, and I believe it goes closely together with the discussion there.

It also links to the discourses around organic computing, atomic design, Loomio's Enspiral Open App Ecosystems, and consensus mechanisms like Raft and, somewhat, Serf's SWIM implementation, if we think of all modular components as autonomous agents that are spawned and destroyed as needed.

So how does all that sound together, facing the need for new discursive principles for our societies? And in relation to a database architecture?

@balupton (Member, Author) commented Jun 18, 2016

As per https://discuss.bevry.me/t/deprecating-in-memory-docpad-importers-exporters/591?u=balupton this issue is now outside the scope of DocPad.

7 participants