deprecate R.converge #1447
LGTM. This still works:

```js
R.converge(R.subtract, [R.multiply, R.add])(4, 3);  //=> 5
R.converge(R.subtract, [R.multiply, R.add])(4)(3);  //=> 5
R.lift(R.subtract)(R.multiply, R.add)(4, 3);        //=> 5
R.lift(R.subtract)(R.multiply, R.add)(4)(3);        //=> 5
```
This looks fantastic! Once stated, it seems obvious, but I've been thinking about this one and I'd like to spend a little time looking at the various uses I've had for `converge`.
I like this 👍 The only issue is that, to those unfamiliar with the applicative functor instance for functions, it isn't obvious how this works.
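For readers unfamiliar with that instance, here is a minimal vanilla-JS sketch (not Ramda's implementation) of how lifting a curried binary function into the function applicative produces the `f(g(x), h(x))` behavior:

```js
// Applicative instance for unary functions:
//   map(f, g) = x => f(g(x))
//   ap(f, g)  = x => f(x)(g(x))
const map = (f, g) => x => f(g(x));
const ap = (f, g) => x => f(x)(g(x));

// lift2 builds x => f(g(x), h(x)) out of map and ap alone.
const lift2 = f => (g, h) => ap(map(f, g), h);

const subtract = a => b => a - b;
const square = x => x * x;
const double = x => x + x;

lift2(subtract)(square, double)(4); //=> 8, i.e. square(4) - double(4) = 16 - 8
```

The names `lift2`, `square`, and `double` here are illustrative, not from Ramda.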
Would you like to open a pull request for this, @benperez?

I actually didn't see this until now.

I don't enjoy thinking about variadic functions. ;)
Variadic functions are always a problem. But I think we need to make a decision about respecting context. I've always been of the mind that as long as it's reasonably easy to do and doesn't get in the way of anything else, continuing to respect context was a good practice. But @asaf-romano pointed out in #1592 that this has been broken for a while. (I'm not sure, but I'm guessing it was broken during @scott-christopher's performance enhancements in #1512.) We've received no complaints about this at all. @asaf-romano only noticed it because a line of existing code looked odd. Perhaps it's time to put a stake in the ground and say that we won't try to support this any further. It's almost certainly a pretty unusual user who uses Ramda to build methods for her OO API. Should we just stop catering to that?
The only interesting use case I see for preserving context is pseudo-decorators set on a singleton object; something like this:

```js
const MessagesService = {
  BASE_URL: "...",
  initialize: once(function () {
    // ...
  }),
  getMessages: memoize(function (msgId) {
    // ...
  }),
  someSwitch: cond([
    [/* ... */, function () { return this. /* ... */; }],
    [/* ... */, function () { return this. /* ... */; }],
    [/* ... */, function () { return this. /* ... */; }]
  ])
};
```
I had to read the source code to see how it worked; you need to know that …
I think the fact that dispatching, which is a big part of fantasy-land and this library, requires use of `this` is worth keeping in mind. This may not affect the API of ramda functions but it will certainly—for better or worse—affect their implementations. I would be in favor of deprecating `converge` and adding a code branch in `lift` specifically designed for functions (one which respects context).
Dispatching is an orthogonal concern. When we dispatch, the context that we would have been preserving is lost anyway. The context that we would preserve would be the sort of thing that you can see in …

The trouble is that many other functions could be used as methods too, but we've only taken this care for the ones that specifically return functions, not the ones that, because of the autocurrying, also could return functions which might be used as methods. For instance, this doesn't work:

```js
var region = {
  tax: 0.06,
  adjustPrices: map(function(val) {
    return (1 + this.tax) * val;
  })
};

region.adjustPrices([5, 10]); //=> Error: Cannot read property 'tax' of undefined
```

because the relevant line of `map`'s implementation is

```js
result[idx] = fn(functor[idx]);
```

If we replaced that with

```js
result[idx] = fn.call(this, functor[idx]);
```

then we would get the expected behavior. But it's far from clear that we would want to do that across all of Ramda's functions. That's part of the reason that I'm leaning toward removing it. Part of it is simply that it's always been on spec. No one has ever requested it. Now that it's been broken for a while, no one has complained that it's missing.

Dispatching works differently. Since we are actually calling a method on an object, the existing context is necessarily lost.
Thanks for the clarification on the state of dispatching versus normal context @CrossEye, the distinction is now clear in my mind. Given that ramda is already so inconsistent about respecting context, I say we just scrap it as a requirement altogether. At the end of the day, if the user wants to use context internally in functions which accept functions, he or she can always simply `bind` it.
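One user-side option, independent of what the library decides, is to bind the context explicitly before handing the callback over. A minimal sketch (the `map` here is a plain stand-in for a context-agnostic library combinator, not Ramda's):

```js
// A context-agnostic map, standing in for a library combinator
// that makes no promises about `this`.
const map = fn => list => list.map(x => fn(x));

const region = {
  tax: 0.5,
  adjustPrices(prices) {
    // Bind `this` ourselves rather than relying on the library.
    return map(function (val) { return (1 + this.tax) * val; }.bind(this))(prices);
  }
};

region.adjustPrices([5, 10]); //=> [7.5, 15]
```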
Scrapping context is cool with me, too, if it significantly improves performance. Anyway, back to the original topic – I feel like I can almost understand how one can replace the other – something to do with the Apply spec. If …
Agreed. A lot of the docs that delegate to the user types really need some work. I think we simply still don't quite know how to do that well. I thought I remembered that the …
In the interest of moving this along, I'll open a PR tonight to deprecate `converge`.
@benperez: That would be really awesome. It's like @paldepind says:

> The way …
Can …?
I can't see how, but nothing would surprise me at this point!
🔔 Sorry for being pushy, but I would like to get 0.20 out the door and would like to include deprecating `converge`.
No worries, the PR with the deprecated tag is #1649. I can update the docs for `lift` in another PR.
It would be helpful if the documentation explained how …
@dalgard I might be wrong but I think that …
I agree. Failing that, we should explain the …
I have not been pushing for this, partly because I knew I'd never done my due diligence on the … Does someone have as convenient a way to implement …?
@asaf-romano thanks for bringing this up. The use of `converge` in `R.juxt`:

```js
R.juxt = _curry1(function juxt(fns) {
  return converge(_arrayOf, fns);
});
```

depends on supporting variadic functions in both arguments to `converge`. As I mentioned much farther up the chain, we're going to have to add a special-case implementation of …

I personally find these kinds of variadic functions very hard to reason about in real code, and to me this difficulty grows dramatically when we're talking about higher-order functions that accept, combine, and return variadic functions. I'm personally in favor of moving away from variadic functions in higher-order functions in ramda, but I also understand that this is JS, so a lot of other people might not see things that way. There are a couple different options for handling them in …
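For what it's worth, a `juxt` that sidesteps `converge` entirely is a one-liner in plain JS. This is only a sketch, not a drop-in replacement for `R.juxt` (it is not curried and does no dispatching): gather the arguments once and apply every branch to them.

```js
// Apply each function in fns to the same argument list,
// collecting the results in an array.
const juxt = fns => (...args) => fns.map(f => f(...args));

juxt([Math.min, Math.max])(3, 4, 9, -3); //=> [-3, 9]
```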
The issue isn't only with variadic functions.
I used to really dislike … I think it's somewhat unrealistic to expect users to use …

```js
var combine = converge;

var copyAtoB = combine(assoc("b"), [prop("a"), identity]);
copyAtoB({ a: 1 });   //=> {"a": 1, "b": 1}

var copyAtoB_L = lift(assoc("b"))(prop("a"), identity);
copyAtoB_L({ a: 1 }); //=> {"a": 1, "b": 1}
```
@asaf-romano, your example can also be written in terms of a couple of nifty combinators:

```js
// C :: (a -> b -> c) -> b -> a -> c
const C = f => y => x => f(x)(y);

// S :: (a -> b -> c) -> (a -> b) -> a -> c
const S = f => g => x => f(x)(g(x));

S(C(R.assoc('b')))(R.prop('a'))({a: 1});
//=> {a: 1, b: 1}
```
I'm not trying to offend my coworkers that much ;)
@davidchambers this may be of interest to you. I just ported data.aviary.birds (a collection of combinators) to JS: https://github.com/fantasyland/fantasy-birds (documentation not finished)
Neat project, @stoeffel! Also, I'm excited to see Transcribe being used in the wild. :) |
Glad you like it! Transcribe is awesome. The only thing missing is using it for more than one file. (btw, sorry for going off topic 😊)
@stoeffel: That's great!
OK, so to me `converge` as it currently stands has 3 problems:

…

We can eliminate all of these problems by reshaping `converge` functionality to be much simpler and easier to reason about. It seems like @CrossEye has always made the point that we shouldn't let implementation concerns dictate API. It seems strange to let the implementation of …
Responding from the bottom up:

Absolutely. But I'm not quite sure what @asaf-romano meant to say. That elegant little chunk is not the current implementation of …

We can, but we need to decide whether we want to. We have not hit 1.0, so we can feel free to change the API as we like, but we also have to decide when to break continuity, and for what reasons.

Agreed, this can just go. It's never shown its worth, and no one has ever really asked for it.
I'm not quite sure what you're suggesting by "mixing types and data". What we're doing is unifying the following sorts of signatures into one:

```
(a -> b -> d)      -> [(e -> a), (e -> b)]                          -> (e -> d)
(a -> b -> c -> d) -> [(e -> a), (e -> b), (e -> c)]                -> (e -> d)
(a -> b -> c -> d) -> [(e -> f -> a), (e -> f -> g -> b), (e -> c)] -> (e -> f -> g -> d)
```

Now it's certainly arguable that this is far too much for one function, but by the same token that we wouldn't want to use the …

Part of the somewhat uncomfortable dynamic of Ramda is the interplay between "functional library" and "for Javascript developers". In Javascript, variadic functions are a normal part of how one works. Configuring …
Thanks for the thorough and thoughtful reply @CrossEye.

The problem with a lot of ramda's type signatures is that they don't actually represent types; they represent the way that types change based on the data passed into functions. As you nicely point out, this is a description of many different "types". The problem is that the second arguments—…

I'm unconvinced by this. Variadic functions are supported in Javascript and are even frequently used, but they're by no means necessary or even more expressive than simply using lists for arguments.
I wrote:

> …

But I was confused. That is what's in HEAD. I was looking at 0.19.1, which has a different implementation. I actually merged Asaf's changes, so I should remember this. 😄
I think we start to veer into philosophical grounds here. What makes something a type? It would be easy enough to argue the same way that … But it's clear enough that the current `converge` behavior can be described like this:

```
converge(f, [g1, ..., gn]) ≍ (a1, ..., ak) => f(g1(a1, ..., ak), ..., gn(a1, ..., ak));
```

There is plenty of room to argue that this should be simplified, that, for instance …
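That law is easy to check with a small sketch (the implementation and branch functions below are hypothetical illustrations, not Ramda's source):

```js
// converge(f, [g1, ..., gn])(a1, ..., ak)
//   === f(g1(a1, ..., ak), ..., gn(a1, ..., ak))
const converge = (after, fns) => (...args) =>
  after(...fns.map(f => f(...args)));

// The classic average: both branches see the same argument list,
// and `after` combines their results.
const average = converge((sum, n) => sum / n, [
  xs => xs.reduce((a, b) => a + b, 0),
  xs => xs.length,
]);

average([1, 2, 3, 4]); //=> 2.5
```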
I don't mean to imply that one cannot live without them. I mean that the substantial majority of Javascript developers do in fact use them. My dividing line for Ramda has been pretty simple. It has seemed a good thing to remove variadic functions from Ramda, and they're almost all gone. But Ramda should still work well with users' variadic functions, since this is a library designed for Javascript programmers and not for Haskell programmers slumming it in JS. If you're really looking to do something closer to Haskell in JS, you're sure to end up disappointed: you'll never really know that the function supplied really accepts Rectangles and returns numbers, and without some really expensive run-time checks, you won't know until quite late that the list supplied contains only Rectangles. So, while Ramda does try to be much more principled than many other JS libraries, it does not try to overcome the basic nature of the language. If a user wants that, she should probably try PureScript, ClojureScript, Elm, or at least TypeScript or Flow.
To me, this issue is less about "doing Haskell in JS" and more about principled programming. Ramda already takes all sorts of positions that are at odds with the majority of "Javascript developers"—no mutation, well defined input and output types, avoiding optional arguments, etc. These principles might come from Haskell, Clojure, or elsewhere but I think they're motivated by writing better Javascript, not "slumming it" in an inferior language. Of course it's possible to write Javascript that does all of those things that ramda rejects and still works as intended, but as you've mentioned, Ramda is simply choosing to take a more principled approach. Given that vanilla Javascript lets the user do pretty much whatever he or she wants, Ramda's main guiding principle strikes me as "addition by subtraction", which is a good thing. The fact that variadic functions fall on the wrong side of that principle strikes me as a little arbitrary, but I can still respect that you've chosen to put a stake in the ground. Having said all that, maybe this abstraction is a bit too much of a jump for ramda as a library. If the requirements of having nice things (more general abstractions) conflict with what currently makes …
I don't think I've been well able to articulate what is for me a very clear dividing line. Ramda should be quite principled. We don't mutate your data. We don't have optional arguments. Our functions are more strictly typed. But we don't take a particular stand on what you do. If you want to pass … to …

This has been my personal view for a long time. I don't know if it's the correct guideline for the larger community, and perhaps it's time to figure that out.
@CrossEye: I think your view works for the larger community, too.
This variant of using `R.lift` no longer works:

```js
R.lift(R.subtract)(R.multiply, R.add)(4, 3); //=> 5
```

Is there a way to achieve the same result in the current version of Ramda?
I guess this change in behaviour was caused by PR #1741.
That's interesting, and I didn't ever notice it. I still often think in terms of …

Do you see any problems that …?
@CrossEye it's totally tangential to the overall discussion, but I haven't really felt that the main benefit of point-free programming is the more compact code, which may be how you think about it; as you say, fat arrows do a reasonable job of replacing it. Some other benefits may be:

…
Having said all this, maybe it reflects my interest in knowing more about the actual benefit of point-free programming. For … I used it quite a bit and found drawbacks, at least with current JavaScript browsers and IDEs:

…
I'm interested in other viewpoints on this, whether similar experiences or, on the contrary, maybe with some advice.
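To make the debugging trade-off above concrete, here is the same computation written point-free and with an explicit lambda (a hypothetical example, not from the thread): the pointed version exposes a named intermediate value to a breakpoint, while the point-free version composes more readily.

```js
const compose = (f, g) => x => f(g(x));

const inc = x => x + 1;
const double = x => x * 2;

// Point-free: no named intermediate value to inspect at a breakpoint.
const pointFree = compose(double, inc);

// Pointed: `bumped` is a lexical binding, visible in Dev Tools.
const pointed = x => {
  const bumped = inc(x);
  return double(bumped);
};

pointFree(3); //=> 8
pointed(3);   //=> 8
```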
@monfera: Thanks for the feedback. I agree that there are other advantages to using point-free as well as a number of drawbacks. I'm interested in the idea of a graph of functions. Do you get practical benefits from this? Do you use particular tools to manipulate or view such a graph?

As to the drawbacks you mention, I don't really have the … The second two are related, I believe. If it's harder to understand, it's harder to refactor. And probably vice versa as well. What I think gains back some readability is to always use type signatures on one's code. This often makes it easy to assemble function chains mechanically.

For the sort of refactoring you describe, I will usually write new functions and only then see if I can simply rename my old ones or in some other way reuse them. In other words, when I have substantial reworking to do, I tend to rewrite rather than do anything that might generally be called refactoring. But having a good test suite means that I can still call it refactoring if anyone asks! 😄
@CrossEye yes, …

and even with this, I have access to the data at a single edge, while a breakpoint in a lambda expression lets me inspect all available lexical bindings, advance the code to some other line, etc. It's like taking a single snapshot through a keyhole as opposed to just opening the door. Dev Tools is great and I like having easy access to its power.

Yes, there's a relation between difficulty of understanding code and difficulty of refactoring. When mentioning rigidity, I assumed good understanding of both the current version and the new version. Even then, the addition of an innocent-looking piece of data, or a switch to a new input format, may largely change the shape and structure of the code - as you say, it can be a rewrite, not a sequence of incremental code changes. You mentioned testing. Small changes often lead to different structures, therefore many functions are superseded, and unit tests have to be scrapped and redone if they're done at the granularity of individual functions and not e.g. modules.

The brittleness seems to come from the fact that data paths and information representations (e.g. object vs array vs tree) are "hardwired" into the code structure, while with lambda expressions you typically have a bunch of lexical bindings and maybe bindings from outer lambdas, and you're way less restricted from referencing any of these or their parts, which is useful for prototyping. What just needs a local change in one specific lambda-style function will usually need a different subgraph of …

So all in all there's a nontrivial cost (to me) when using the point-free style, maybe because my code may not be as dominated by …

The "graph of functions" thing is handy if you want to treat your code as data, and don't want to go through the route of entirely relying on a JavaScript parser. For example, a user of your system may customize the behavior of the application by composing functions, or you want to serialize your logic, or want to run analytics on your source code.

A directed acyclic graph is also more language-neutral than full-blown JS, so you can move logic between languages if need be. There are other tools that bridge over languages, e.g. the Rx family (RxJS etc.) or transducer libraries, and assuming a …

Ah, one more thing with the graph style of coding. You can start with a single-pass implementation of whatever you work on, i.e. you apply the resulting module (i.e. the root node of the graph) to some value. Then maybe you replace the …

But again, these are just my thoughts and I probably don't think of some other benefits that go with it.
They were well worth sharing. Thank you.
`R.lift` covers all my uses of `R.converge`. Is `R.converge` only necessary to support non-unary "transformation" functions?