Live binding integration layer #8
Well, frankly, this is super exciting! I have a few concerns:
So, having said those things, let's try it! If I'm understanding you correctly, the code snippet you posted is working right now? Are you using the new helper API/system I introduced in Vash ~0.5 for buffering, or something you rolled on your own? In a perfect world, what kind of API and functionality are you expecting, or what do you think would be best? It might be a good place to start, given that you have a working example that uses the current system with relatively verbose syntax. I could see a special-cased version of your snippet where, if you don't pass a callback/content block, it assumes you only want to output the changed value without extra markup. But I haven't used live binding enough to know the common use cases.
I'm using the new buffer helper system, yes. It was an (almost) perfect fit for this. My changes are not yet published, though: the current code I have is still strongly tied to the CanJS implementation of observable models, and that first needs to be decoupled. I just wanted to gauge your interest before investing time in doing so.

I'm not envisioning much additional API above what you are already suggesting: a simplified function signature which assumes you only output the changed value. KISS is very important here, imho. Currently I use the buffer helper to wrap content in a `<span>`.

The neat part is: this all works without any real core changes to Vash. The only real change is in how the buffer is used: a buffered helper could push a new buffer onto the stack, do whatever it needs to do and have output collate into this new buffer, pop the new buffer off the stack, join it as a string, and then push the joined string onto the original buffer. Fairly easy, but a lot more clean.

The not-so-neat part is the additional span. We'd need to dive a lot further into the AST to find out if there is already an HTML tag present onto which we can latch an ID for the live binding. I'm still investigating feasibility here.

The company I work for has opened an organization account w/ GitHub to share feature improvements and bug fixes back to the community. I'll send you a pull request as soon as I get the time to straighten out the code and get everything submitted. (Hopefully I'll find the time to do so over the evenings this coming week.)
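The stack-of-buffers idea above can be sketched roughly like this. All names here (`BufferStack`, `pushBuffer`, `popBuffer`) are hypothetical and not part of Vash's actual API:

```javascript
// Illustrative sketch of buffered helpers via a stack of buffers;
// none of these names exist in Vash itself.
function BufferStack() {
  this.stack = [[]]; // start with a single root buffer
}

// Append output to whichever buffer is currently on top.
BufferStack.prototype.push = function (str) {
  this.stack[this.stack.length - 1].push(str);
};

// A buffered helper starts by pushing a fresh buffer onto the stack...
BufferStack.prototype.pushBuffer = function () {
  this.stack.push([]);
};

// ...and ends by popping it, joining it into a string, and pushing
// that string onto the original (now topmost again) buffer.
BufferStack.prototype.popBuffer = function () {
  var joined = this.stack.pop().join('');
  this.push(joined);
  return joined;
};

// Usage: output collates into the helper's buffer, then collapses back.
var buf = new BufferStack();
buf.push('<p>before</p>');
buf.pushBuffer();
buf.push('live value');
var inner = buf.popBuffer();      // the helper's collated content
var html = buf.stack[0].join(''); // root buffer now holds everything
```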
I'm not sure what you mean by a stack of buffers. If I'm understanding correctly, what you're talking about doing is similar to what the highlight helper demo does.

Regarding the additional span, that's a lot more difficult, because at the time the helpers are called there is no AST, just entries in the buffer. In addition, the HTML parsing is minimal, meaning an AST node that represents an opening HTML tag is simply a string of HTML (no DOM-like things like attributes, className, id, etc.). Off the top of my head, there is a way around all of this, but it's a little hacky. One idea is for vQuery to provide a way to serialize an AST, so it can be sent client-side, reconstituted into a live structure, and then registered as the AST for a particular template. In the compiler, when a node is visited, it could attach a unique ID to each node. There would also need to be some instrumentation.

Keep in mind that adding ids to existing HTML tags is pretty error prone, because an id may already be defined... might want to think about that.

I just had another idea...
Regarding the stack of buffers: it's very similar to the current situation with grabbing an index into the buffer.
But then I realized that this conflicts with the regular push/pop semantics of a JS array for appending/removing items, and that it would limit your post-processing options for adding additional HTML before/after the piece of buffer handled by the helper. So: meh, probably best to just keep using the index.

Regarding the additional wrapping HTML elements: I was already aiming for the straight string of HTML and always wrapping in a new tag for simplicity's sake. Also, if you add a semi-unique namespaced prefix onto an auto-incrementing number, you should get a reasonably safe ID that shouldn't lead to duplicates. Maybe we could circumvent the inline/block dilemma by generating empty boundary indicators such as:

```html
<span id="vash-live-start-27"></span>
<div>Original markup that should be live-bound goes here</div>
<div>And here</div>
<span>Or maybe here</span>
<span id="vash-live-end-27"></span>
```

Then leave construction of the proper DOM replacement logic (matching and replacing tags between these boundaries) up to the third-party integration layer for whatever (pairing of) MV* and DOM library you are using client-side. E.g. with jQuery you get something like:

```javascript
$( "#vash-live-start-27" )
  .nextUntil( "#vash-live-end-27" )
  .remove()
  .end()
  .after( html ); // insert the re-rendered HTML between the markers
```

Regarding separation of DOM manipulation & model implementations from Vash: I did a bit of sketching, and after some more thought I think I have some ideas on how to separate the required event binding to models, and the DOM manipulations for inserting live re-rendered content, from the actual templating/rendering logic handled by Vash.
And somewhere a literal would be built up and stored that would structurally resemble:

```
{
  "vash-live-27" : { model : {...}, prop : "propA", renderer : fn( model ){ ... } },
  "vash-live-28" : { model : {...}, prop : "propB", renderer : fn( model ){ ... } }
}
```

This kind of structure could be used to set up the live model bindings externally from Vash. CanJS, Backbone, Knockout, or any other observable-models framework can be used to listen for changes of the bound property. This should also keep Vash easily unit-testable in Node, as it won't need to concern itself with events, observable models, or DOM manipulations. All it should do is expose the above structure somewhere for a third party to pick up and integrate live bindings on. I'm still thinking of a good publicly reachable place to fit these produced mappings on individual template instances...
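As a sketch of how a third party could consume such a mapping: the names `integrateLiveBindings` and `replaceRegion` are invented, and `model.bind('change', ...)` follows the CanJS-style convention mentioned in this thread:

```javascript
// Sketch: a third-party integration layer walking the exposed mapping
// and wiring each entry to an observable model. `integrateLiveBindings`
// and `replaceRegion` are invented names, not an existing API.
function integrateLiveBindings(bindings, replaceRegion) {
  Object.keys(bindings).forEach(function (id) {
    var b = bindings[id];
    // CanJS-style 'change' event convention, as mentioned in the thread.
    b.model.bind('change', function (prop) {
      if (prop === b.prop) {
        // Re-render just this region; the DOM layer decides how the
        // HTML replaces the content between the boundary markers.
        replaceRegion(id, b.renderer(b.model));
      }
    });
  });
}

// Usage with a stub model exposing a minimal bind('change', fn):
function StubModel(data) { this.data = data; this.fns = []; }
StubModel.prototype.bind = function (evt, fn) { this.fns.push(fn); };
StubModel.prototype.set = function (prop, val) {
  this.data[prop] = val;
  this.fns.forEach(function (fn) { fn(prop); });
};

var updates = {};
var m = new StubModel({ propA: 1 });
integrateLiveBindings(
  {
    'vash-live-27': {
      model: m,
      prop: 'propA',
      renderer: function (mm) { return String(mm.data.propA); }
    }
  },
  function (id, html) { updates[id] = html; }
);
m.set('propA', 2); // updates['vash-live-27'] becomes '2'
```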
Ok, let's leave the buffer API as is for now then.

Regarding wrapping: inserting tags as "markers" seems fragile, especially given that jQuery snippet. Have you considered using a custom element? This is only an issue with "complex" bindings, since simple (shorthand) bindings could be fairly certainly wrapped with a span tag. XPath might be another alternative to avoid inserting elements (Firebug has a good implementation).

Regarding @html.live: what do you need from Vash/me for this? It seems like it's possible to implement the whole thing using a helper.

The "connector" to other libraries still seems like an issue. Something needs to know about the other, which means some kind of glue. As I think about it more, this sounds like a relatively complex bit of code, even with the bindings exposed as you suggested. I was thinking of how this would work in a no-fuss way with Backbone, and an answer did not immediately come to mind. I assume you have an implementation for CanJS?
Your suggestion to use custom tags is something I've also considered, but Internet Explorer would still have problems with it when combined with setting content through `innerHTML`. We could also use XML namespacing. Another option we have is to allow explicit marking of the element that will serve as the live-bound container.
I'm working on hammering out more details and abstracting out the current hard dependencies on jQuery/CanJS in my current working version. I'll have more on that soon, but I'm running into a few snags with re-entrancy that I have to solve first: Vash currently uses one shared helpers object. Luckily this still won't require any fundamental change to the lexer or parser, so it's still not that invasive a change. Also, such per-instance helpers give us a place to store and expose created live bindings, neatly covering that issue as well.
I was looking at Ember's mustache binding code for the first time. (Wow, what a complex library.) They do some crazy things, like parsing object "paths": they parse the path and convert it to a "dasherized" class name, which is then used later to locate the binding location in the DOM.

They also have a concept of sub/child views that are aware that they are children, and whose parent knows which children it has. So if a child is told to rerender, it tells all of its children to rerender too. So that's interesting; maybe you'd already seen it.

**Helpers as a class:** I'm obviously not privy to your code and implementation, but I would rather not have helpers become a class with a prototype. I feel that Vash as a whole should be stateless, and so should the helpers. This is why Vash's runtime is extremely minimal. Requiring an instance of something to be passed into a template for rendering also disallows precompiled templates, which is an important feature to me. It's something that sets Vash apart from other Razor-based libraries, and is important in a complex client-side application. A template should be transparent.

I don't mean to discourage you, but please keep these ideas in mind (or tell me why I'm wrong! :) ) while you're working this out.
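The dasherized-path idea can be sketched like this. The `binding-` prefix and exact format are invented for illustration; Ember's real scheme differs:

```javascript
// Rough sketch of the dasherized-path trick: turn an object path into
// a class name that can later locate the binding element in the DOM.
// The 'binding-' prefix is invented; Ember's actual format differs.
function dasherizePath(path) {
  return 'binding-' + path.toLowerCase().replace(/\./g, '-');
}

var cls = dasherizePath('App.user.name'); // 'binding-app-user-name'
```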
I know about keeping Vash as lean, efficient and adaptive as possible. You expressed concern about that before, and my work tries to honor it as much as possible.

I actually finished my refactoring last Friday, and my changes can be summarized as follows: in the official Vash build, a reference to the static helpers literal is shared. As you can see, the impact is really quite isolated and minimal; it does, however, remove the shared state on the helpers.

I'm moving on to separating my live binding implementation from CanJS's observables next.
Cool, can't wait to see what you come up with! I'm not entirely sure of the magnitude of the change yet. I'm thinking about adding in either callbacks or a simple evented pattern for being able to hook into template compilation and rendering. I haven't made any decisions yet, but let me know if that would help you.
I found a bug with the current layout helpers. So now I'm super eager to see what you come up with for this Helpers class. I added some tests for this to the suite (29d42d7); hope this helps your work.
Ok, I'll try to merge that into my dev branch and see if it passes the tests. It will take a bit of time to get going on that, though; iteration deadline coming up and such. I'll probably work on it in my off time in the evenings.
So I just pushed a fix for the bug I mentioned, and updated some of the tests. The compiler code was pretty heavily modified. This change keeps everything working (better than before, since there were bugs :) ), but it definitely only works in a single-threaded environment, since it's all global state. I'm wondering if it's time to build something like a "render context", which sounds a little bit like what you've been building.
👍
I took a crack at this whole rendering context thing, and pushed my branch: https://github.com/kirbysayshi/vash/tree/tplctx. I realize that this is duplicate work, but I wanted to make my own attempt to force deep thought. I left a fairly detailed commit message (c179c36), so I won't repeat it here. I'm also not sure this actually solves your problem; there are a few extra things in there as well.

So I'm curious what you think, as well as how this compares to what you were working on in terms of complication and functionality.
Ah snap. I just finished the first part of the pull request after merging all the changes you made on the master branch; I hadn't noticed your updates on the tplctx branch yet. The actual conversion to the class approach seems to coincide for a great deal with the approach that I submitted, and is mostly compatible.

I like the new approach of using the special marks. Not too sure about flattening the buffer operations onto the Helpers prototype: it may make them more accessible, but it feels a bit like clutter. Also, for safety and to prevent tampering, I wouldn't expose the internal buffer directly.

On the whole, I'm leaning towards your implementation for the updated marks (with the mentioned speed boost applied) and Helpers class, but towards mine for handling the internal buffer state. What do you think?
I think we agree on most everything here! I added another commit to the branch yesterday. Flattening the buffer ops was because I was thinking of it as a "render context that happened to be named Helpers". :) I think your implementation of hiding the buffer is better. So once we finalize the merge, I'll port over the VTMark stuff using your changes as the primary base. One thing I thought was interesting about the branch is what the rendered template returns.
Heh, well look at that. I must've totally missed that when I took a look last evening. Indeed, it already uses the speed-up. That's great.

Yeah, I kind of haven't optimized everything yet while it is still 'in a state of flux', so to speak. ;-)

Oooh! That is interesting. Promises and delayed evaluation were on my list of things to try and build into Vash as well. (You know, it could solve the problem with sub/ancestor templates in layout composition helpers needing to be available synchronously as well.)
The layout stuff is already pretty weird, or at least feels that way due to the system working in both the browser and Node. Right now there's actually a bug regarding multiple template inheritance that I think can be fixed by clever VTMark manipulation. If you have a chain of templates, like so:

```
// layout.vash
@html.block('yes')

// page.vash
@html.extend('layout', function(model){
  <div class="wrapper">@html.block('yes', function(model){ <p>Default content</p> })</div>
})

// inner.vash
@html.extend('page', function(model){
  @html.body('yes', function(){ <p>Indeed!</p> })
})
```

the output will be:

```html
<div class="wrapper"></div>
<p>Indeed!</p>
```

where I think most people would expect:

```html
<div class="wrapper"><p>Indeed!</p></div>
```

VTMarks are interesting, because they are kind of like promises. Not all of them, but using them as a placeholder for something else is definitely promise-like. With asynchronous template loading it gets really tricky, because the engine has no way of knowing if there will be promises loaded in by a sub or parent template. So it might be impossible to know when to fulfill the primary promise on a simple template.
A straightforward solution to predictability is to always return a promise for any operation that may potentially be async. Said promise could be resolved either immediately (if the actual operation completes synchronously), or at a later point in time. Welding it to the current layout / master-page / extend mechanism's control flow is more tricky; you'd probably have to change some things there. I do have some ideas that could work, but I'd need time to flesh out the concept before I can present it. (It may also mean that you can get rid of the nested callback architecture...)
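The "resolve immediately when the work is synchronous" idea can be sketched with a toy `render` function. This is not Vash's actual API, and the wrapping markup is purely illustrative:

```javascript
// Toy sketch, not Vash's API: any possibly-async operation returns a
// promise, resolved immediately when the work completes synchronously.
function render(template) {
  if (typeof template === 'string') {
    // Synchronous case: the template source is already in hand,
    // so resolve right away. Wrapping markup is purely illustrative.
    return Promise.resolve('<p>' + template + '</p>');
  }
  // Asynchronous case: the source still has to arrive (e.g. fetched
  // over the network), so chain off its promise instead.
  return Promise.resolve(template).then(function (src) {
    return '<p>' + src + '</p>';
  });
}
```

Callers then treat both cases uniformly: they always get a promise back, regardless of whether the template had to be loaded.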
I just can't imagine generated code like this:

```javascript
var __vtemp = model.everyExpressionBasically;
html.buffer.push( __vtemp && __vtemp.promise
  ? ( html.promises.push( __vtemp.promise ), __vtemp.promise )
  : __vtemp );
```

Or rather, I can't imagine it for every expression.
Layout: one thing I debated was doing it how Jade works: resolving blocks at compile time. But that's a huge architecture change, going almost all the way down to the lexer. It would require the parser to load files, spawn lexers and compilers, and might require the parser to actually be a parser and compose complex tokens.
I haven't read this whole discussion, but the ideal way to do layouts with sections is probably closer to the way ASP.NET WebPages does it (see the source).

Each Razor template is compiled into a class that inherits the `WebPageBase` class. When rendering the page, the system maintains a stack of contexts to handle layout pages. This is built on the ability to put a chunk of template into an anonymous method (lambda / function expression) that returns the rendered content. By contrast, Vash (like ASPX files) currently can only make functions that render their content to the current position in the stream. However, since the function's caller can still extract that content, this shouldn't be an issue.

Section contents would be compiled (in content pages) into calls that look like:

```javascript
html.defineSection("someName", function() { return content; });
```

This function would add the function to a private map of section definitions within the topmost context on the stack. After rendering the entire content page, the base implementation then renders the layout page; from within the layout page, the registered sections can be rendered in place.

This architecture allows the layout page to have its own layout without interfering at all with the inner content page. Sections from each level of layout occupy different contexts on the stack and do not affect each other.

For more implementation details, see the source.
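A minimal sketch of the context-stack-plus-section-map mechanism described above. Only `defineSection` comes from the discussion; the other names are invented, and the real WebPages implementation differs:

```javascript
// Illustrative context stack: content pages register sections in the
// topmost context; the layout page pulls them back out.
var contextStack = [];

function pushContext() {
  contextStack.push({ sections: {} });
}

// Add a section body to the topmost context's private map.
function defineSection(name, fn) {
  contextStack[contextStack.length - 1].sections[name] = fn;
}

// The layout page, running against the same topmost context,
// looks a section up and renders it in place.
function renderSection(name) {
  var ctx = contextStack[contextStack.length - 1];
  return ctx.sections[name] ? ctx.sections[name]() : '';
}

function popContext() {
  contextStack.pop();
}

// Usage: a content page defines a section, its layout renders it.
pushContext();
defineSection('head', function () { return '<style>body{}</style>'; });
var layoutOut = '<head>' + renderSection('head') + '</head>';
popContext();
```

Because each layout level gets its own context frame, a layout's own layout never sees the inner page's sections, which is the isolation property described above.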
I'm going to agree with SLaks here. When I said in my previous post that I 'had some ideas that might work', I pretty much had the method employed by WebPages in mind.
I think the simplest way to implement this within Vash would be to pass the context from the current template as a second parameter to the layout template. The content template would write:

```
@html.setLayout(someTemplate)
<div>...</div>
@html.defineSection("head", function() {
  <style>...</style>
})
```

The compiled method would end with:

```javascript
if (html._layout === null)
  return html.__vo.join('');
else
  return resolveTemplate(html._layout)(model, html);
```

The compiled layout template would start with:

```javascript
function compiled(model, parentContext) {
  var html = new vash.Helpers();
  if (parentContext)
    html.parentContext = parentContext; // This is being used as a layout template
  html.__vo.push(/* ...rendered template output... */);
  if (parentContext) {
    // Verify that all sections have been rendered
    if (!html._renderBodyCalled)
      throw new Error("Layout template must call html.renderBody()");
    if (!Object.keys(parentContext._sections)
         .every(html._renderedSections.hasOwnProperty.bind(html._renderedSections)))
      throw new Error("Some sections were not rendered");
  }
  return html.__vo.join('');
}
```
I have two goals for layout:
I need a lot of help with 2), and what follows is one possible idea. The primary pain point, at least what I found while implementing the current layout helpers, is that there is no way of knowing when a template has "finished" rendering. Knowing this would allow for block resolution and injection into the proper places and contexts once everything is in place, instead of when layout methods are called on the fly. One way to implement this is by allowing templates to emit events.

These events would only be used internally. Passing in the parentContext may also be required, so thanks for that example.
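A sketch of what such internal events could look like, using a tiny hand-rolled emitter. The event names `renderStart`/`renderEnd` are invented for illustration, not an actual Vash API:

```javascript
// Invented event names; this only illustrates the "know when a
// template has finished rendering" idea with a tiny emitter.
function TinyEmitter() { this.handlers = {}; }
TinyEmitter.prototype.on = function (evt, fn) {
  (this.handlers[evt] = this.handlers[evt] || []).push(fn);
};
TinyEmitter.prototype.emit = function (evt, data) {
  (this.handlers[evt] || []).forEach(function (fn) { fn(data); });
};

var engine = new TinyEmitter();
var finished = [];

// Layout machinery could listen internally for completed templates,
// and only then resolve blocks into their proper places.
engine.on('renderEnd', function (name) { finished.push(name); });

function renderTemplate(name) {
  engine.emit('renderStart', name);
  // ...actual buffer/VTMark work would happen here...
  engine.emit('renderEnd', name);
}

renderTemplate('page'); // finished now contains 'page'
```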
I'd go with a callback instead of a full event stack; it seems more lightweight. Alternatively, if you're already thinking of integrating promises, use a promise and don't resolve it until the sub-template is done. (Btw, should we maybe split this discussion into its own issue?)
Absolutely right, done: #15
Not really sure where this ended up, but if there is still a live-binding integration sitting somewhere, please reopen!
Hi.
How interested would you be in an integration layer for live binding? I've been working on integrating Vash for use with CanJS instead of its default EJS view engine and I've got a rudimentary version of live binding working with it.
E.g. I can have Vash process a template to have part of the view contents re-render whenever the 'time' property on the model changes, provided the model implements some kind of 'onchange' callback mechanism. In CanJS's case that means binding to the 'change' event on the model using:

```javascript
model.bind( "change", function(){ /* ... */ })
```
As this requires a DOM implementation and should hook into a not-further-specified 'observable model' implementation, it could only ever be done by exposing a generic integration layer that other frameworks fill in themselves. However, it could prove to be a really powerful feature for client-side apps.
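A minimal sketch of what such a generic integration layer could look like. Every name here (`liveBind`, `subscribe`, `updateDom`) is invented for illustration and not an existing Vash API; the host app supplies both the model subscription mechanism and the DOM update function, so Vash itself stays DOM- and model-agnostic:

```javascript
// Invented names throughout: Vash would only expose the binding data;
// the host app plugs in its observable model and DOM library.
function liveBind(binding, subscribe, updateDom) {
  // binding: { model, prop, renderer }
  subscribe(binding.model, function (changedProp) {
    if (changedProp === binding.prop) {
      updateDom(binding.renderer(binding.model));
    }
  });
}

// Usage with a minimal hand-rolled observable standing in for CanJS:
var handlers = [];
var model = { time: '12:00' };
function subscribe(m, fn) { handlers.push(fn); }

var output = '';
liveBind(
  {
    model: model,
    prop: 'time',
    renderer: function (m) { return '<span>' + m.time + '</span>'; }
  },
  subscribe,
  function (html) { output = html; }
);

model.time = '12:01';
handlers.forEach(function (fn) { fn('time'); }); // simulate 'change'
// output is now '<span>12:01</span>'
```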