`defer` statements like in Go or `finally` in Java? #82
Unfortunately this would prevent proper tail calls, and I doubt that the conflict is avoidable. Sacrificing PTC for this is not an option.
There is a trade-off here: scoped cleanup alters the tail context, and that will always have semantic consequences.
Theoretically, the only possibilities (without changing the semantic rules) are idempotent effects that provably have no more footprint than constant space. For example, keeping a flag in each tail-call frame to indicate a fixed cleanup that behaves the same whether invoked once or more would still keep the behavior effectively PTC, because recursive calls would not acquire more space for the activated frames. Arbitrary deferred actions, as specifiable in Racket, cause similar problems in contract handling. I have not explored further possibilities of the solution yet.
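The flag-based idea can be sketched in Python (all names are hypothetical, and a loop stands in for PTC): a single shared flag makes the cleanup idempotent, so an arbitrarily long chain of tail calls needs only constant extra space for it.

```python
# Hypothetical sketch: an idempotent, constant-space cleanup compatible with PTC.
# Instead of stacking one cleanup record per recursive frame, every tail call
# shares a single flag; running the cleanup once or many times is equivalent,
# so the whole chain needs only O(1) extra space.

class IdempotentCleanup:
    def __init__(self, action):
        self._action = action
        self._done = False   # the single shared flag

    def run(self):
        if not self._done:   # idempotent: later invocations are no-ops
            self._done = True
            self._action()

log = []

def countdown(n, cleanup):
    # A loop stands in for a chain of proper tail calls; every "call"
    # reuses the same cleanup object rather than stacking a new one.
    while n > 0:
        n -= 1
    cleanup.run()            # scope exit of the whole chain
    cleanup.run()            # running again has no further effect

c = IdempotentCleanup(lambda: log.append("released"))
countdown(100000, c)
print(log)  # ['released'] - exactly once, regardless of recursion depth
```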
Deferred functions can be packed into the tail-call argument structure, so I presume it is not impractical. The most confusing thing is actually the syntax.
Is it allowed to allocate here?
Yep, this is one of the problems, but a boring one. The simple way is to just defer everything that is visually possible to defer; that is one obvious reason to have dedicated syntax for it. But more generally, there is also the problem of deciding the destination: at which scope exit should a deferred expression be evaluated?
Deferred expressions are evaluated at scope exits. As variables may be re-declared (later-declared names hide earlier-declared ones), the deferred expression must be bound at its point of declaration.
This does not necessarily mean that the actual effects of the evaluations are bound to scope exits. Consider memory deallocation: even C++ is free to defer the effects of calls to the global deallocation function, as long as the as-if rule holds, because deallocations are not observable behavior in the language. However, once you want the behaviors concerned to be explicit in the semantics (e.g. the guarantee of PTC), that becomes the rule-breaker: you will eventually need more rules overriding the default ones to make optimization of the uninteresting behaviors possible again.
Sounds unclear to me. By naming the feature "hiding", it introduces new names while leaving the hidden ones unchanged; the canonical way to implement local blocks (immediately applied lambda abstractions) has the same property.
This is another idiomatic use, which is out of the scope of resource cleanup. "Defer" can still be the right name, and RRID certainly is not. (It is actually closer to the semantics of current C++ potentially-throwing destructors, though.) The problem is that it interferes with PTC even more. I don't think you can have anything meaningful here before you know that the deferred evaluations have precise properties (i.e. space-complexity constraints) consistent with PTC.
I don't expect such use to be sane, but ruling it out is not trivial either, short of explicitly making it "unspecified". This is essentially the same kind of work: shaping the language rules so that they can be proven consistent with the PTC guarantee.
The invariant of Asteria is that values may be copied or erased with no side effects, so memory deallocation is required to have no side effects. The only trade-off is that deferred expressions might not be executed at all in case of an initialization failure of their context, for example due to a failure to allocate memory.
Consider:

```
var a = "hello";
func get_a() { return& a; }
var a = 42;
std.debug.print("a = $1", a);              // prints the second `a`, which is `42`
std.debug.print("get_a() = $1", get_a());  // prints the first `a`, which is "hello"
```

From a low-level point of view, the later-declared reference with the name `a` hides the earlier one. This is purely for convenience: requiring statements that consist purely of assignment expressions to use a grammar construct distinct from definitions with initialization would not be nice.
As mentioned in the first paragraph, scope exits cannot have side effects other than deferred expressions. If we pack deferred expressions into the PTC arguments, then we can evaluate them after every PTC (note that PTCs can be chained).
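A minimal trampoline in Python (all names hypothetical) can illustrate the scheme: each tail call is reified as a `TailCall` record that also carries the deferred expressions owed by the frame being left, and the driver loop evaluates them after each bounce, so the stack stays flat even for chained PTCs.

```python
# Hypothetical sketch of packing deferred expressions into the tail-call payload.

class TailCall:
    """A reified tail call, carrying the leaving frame's deferred expressions."""
    def __init__(self, func, args, deferred=()):
        self.func = func
        self.args = args
        self.deferred = list(deferred)

def trampoline(func, *args):
    result = func(*args)
    while isinstance(result, TailCall):
        # The frame that produced this record has exited via a tail call:
        # evaluate its deferred expressions now, in LIFO order.
        for d in reversed(result.deferred):
            d()
        result = result.func(*result.args)
    return result

events = []

def step(n):
    events.append(f"enter {n}")
    if n == 0:
        return "done"
    # Chained PTC: each step packs one deferred expression into the payload.
    return TailCall(step, (n - 1,),
                    deferred=[lambda n=n: events.append(f"defer {n}")])

print(trampoline(step, 3))  # done
print(events)
```

Note that each frame's deferred expressions run as the frame is left, not after the final result is produced, which matches scope-exit semantics rather than Go's end-of-function semantics.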
No, that is the headache. Do avoid that py-ish™ stuff.
Basically I agree, but this is out of scope. (ISO C++ does have such requirements on some standard library types.)
Virtually the only difference between

```
var a = foo();
bar(a);
var a = foo();
bar(a);
```

and

```
var a = foo();
bar(a);
a = foo();
bar(a);
```

is that the former actually creates two variables while the latter introduces only one. The difference can be observed if the name is captured by something that outlives the re-declaration, as with the closure in the `get_a()` example above.
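Python cannot re-declare a name, but the same observation can be reproduced with a sketch (hypothetical helpers): rebinding one variable versus introducing a fresh one in a nested scope is invisible locally, yet observable through a capture.

```python
# Rebinding one variable vs. introducing a fresh one is indistinguishable
# from straight-line code, but observable once the name is captured.

def rebinding():
    a = "hello"
    get_a = lambda: a   # captures the variable itself, not its current value
    a = 42              # plain assignment: still the same variable
    return a, get_a()

def fresh_binding():
    a = "hello"
    get_a = lambda: a   # captures the outer `a`
    def inner():
        a = 42          # a distinct variable in a nested scope ("hiding")
        return a, get_a()
    return inner()

print(rebinding())      # (42, 42)      - one variable; the closure sees the new value
print(fresh_binding())  # (42, 'hello') - two variables; the closure sees the old one
```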
This is not about the ease of seeing the visual difference. It is about the fact that it breaks many kinds of semantic reasoning in unexpected manners, which can render the specification of the language (if any) almost useless here. This is exactly the lesson you had better learn from languages like Python; see this example (in zh-CN).
What Python does is that assignments implicitly become definitions. We do not do that: assignments (compound or simple) never become definitions. We are aware that simple assignments destroy the contents of their destination variables, so it is acceptable to substitute a definition for a simple assignment, but not the other way around.
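For reference, the Python behavior being rejected here is easy to demonstrate: an assignment anywhere in a function body implicitly turns the name into a definition that is local to the entire function.

```python
# In Python, an assignment anywhere in a function makes the name local to the
# whole function body - the implicit assignment-becomes-definition behavior.

x = "global"

def reads_then_assigns():
    try:
        print(x)            # raises: the assignment below already made
    except UnboundLocalError:  # `x` local to this whole function
        result = "unbound"
    x = "local"             # this assignment is implicitly a definition
    return result, x

print(reads_then_assigns())  # ('unbound', 'local')
print(x)                     # global - the module-level `x` is untouched
```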
So this is saner than Python, but still confusing. Under the traditional meaning of block scope, a block creates a fresh environment, and once the scope exits, the environment is dropped. If re-declaring an already-declared variable implies hiding, there is an expectation that re-declaring a name in the same scope is equivalent to declaring it again inside a fresh nested block, yet the two differ in more ways than just how many variables are created.
Yes. I presume that, as described in the OP, such deferred callbacks are invoked during PTC unpacking, which means they execute in rebuilt contexts rather than in the contexts where they were created. There could be unexpected effects, though.
OK, if this is just the remaining problem... I'd say that's why I'm not fond of implicit blocks and ALGOL-like block syntax, which does not emphasize the scope and merely makes block boundaries visually significant. Generally, blocks are essentially context-sensitive (as lambda abstractions are), but curly-braced languages pretend they are not. This only works when the language provides no access to contexts and the identification of contexts is of no interest, which is certainly not the case once more powerful features are needed; PTC is just one very basic such feature.
By the time a deferred expression is evaluated, however, all arguments will already have been bound. So it might be possible to pass a null pointer as the argument getter, which should never be used. Another solution would be to have the tail-call argument struct carry the zero-ary argument getter from the callee, guaranteeing that all PTC'd contexts have correct, valid zero-ary argument getters. The corollary is that we then have to keep an indefinite number of contexts, and disposal of deferred expressions becomes much nastier, because we cannot just bail out on exceptions (deferred expressions are always evaluated, even when another exception is thrown), so we are forced to catch and stash them.
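The catch-and-stash discipline might look like this in Python (a sketch; the re-raise policy at the end is an assumption): every deferred expression runs even if an earlier one throws, and exceptions are collected instead of aborting the remaining cleanups.

```python
# Sketch of the "catch and stash" discipline: deferred expressions always run,
# and exceptions raised by them are stashed rather than propagated immediately.

def run_deferred(deferred):
    stashed = []
    for d in reversed(deferred):   # LIFO, matching scope-exit order
        try:
            d()
        except Exception as e:     # cannot bail out: stash and keep going
            stashed.append(e)
    if stashed:
        raise stashed[0]           # simplistic policy: re-raise the first stashed

log = []

def failing():
    raise RuntimeError("cleanup failed")

deferred = [
    lambda: log.append("first"),
    failing,
    lambda: log.append("last"),
]

try:
    run_deferred(deferred)
except RuntimeError as e:
    log.append(f"stashed: {e}")

print(log)  # ['last', 'first', 'stashed: cleanup failed']
```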