
deprecate R.converge #1447

Closed
davidchambers opened this issue Oct 15, 2015 · 50 comments

@davidchambers
Member

R.lift covers all my uses of R.converge.

> R.converge(R.subtract, [R.inc, Math.sqrt])(9)
7
> R.lift(R.subtract)(R.inc, Math.sqrt)(9)
7

Is R.converge only necessary to support non-unary "transformation" functions?
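For those wondering why this works: for the function applicative, lifting a binary function f over two unary functions g and h gives x => f(g(x), h(x)). A hand-rolled sketch (not Ramda's implementation):

```javascript
// lift2 for the function applicative: apply f to the results of g and h
// on the same input. Hand-rolled sketch, not Ramda's implementation.
const lift2 = f => (g, h) => x => f(g(x), h(x));

const subtract = (a, b) => a - b;
const inc = n => n + 1;

lift2(subtract)(inc, Math.sqrt)(9); //=> 7  (subtract(inc(9), Math.sqrt(9)) = 10 - 3)
```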

@buzzdecafe
Member

LGTM. This still works:

R.converge(R.subtract, [R.multiply, R.add])(4, 3); //=> 5
R.converge(R.subtract, [R.multiply, R.add])(4)(3); //=> 5
R.lift(R.subtract)(R.multiply, R.add)(4, 3); //=> 5
R.lift(R.subtract)(R.multiply, R.add)(4)(3); //=> 5

@CrossEye
Member

This looks fantastic! Once stated, it seems obvious, but I've been thinking about this one and useWith a lot, and didn't manage to come up with it.

I'd like to spend a little time looking at the various uses I've had for converge, but I'm not coming up with any cases in the abstract that this wouldn't cover.

@paldepind
Member

I like this 👍

The only issue is that, to those unfamiliar with the applicative functor instance for functions, it isn't obvious how this works.

@davidchambers
Member Author

Would you like to open a pull request for this, @benperez?

@benperez
Contributor

I actually didn't see this until now. lift won't respect context or variadic functions for branching functions at the moment. We could special-case it in R.lift if it's important to keep that behavior. Let me know what you guys think.

@davidchambers
Member Author

I don't enjoy thinking about variadic functions. ;)

@CrossEye
Member

Variadic functions are always a problem. But I think we need to make a decision about respecting context. I've always been of the mind that as long as it's reasonably easy to do and doesn't get in the way of anything else, continuing to respect context was a good practice.

But @asaf-romano pointed out in #1592 that this has been broken for a while. (I'm not sure, but I'm guessing it was broken during @scott-christopher's performance enhancements in #1512.) We've received no complaints about this at all. @asaf-romano only noticed it because a line of existing code looked odd.

Perhaps it's time to put a stake in the ground and say that we won't try to support this any further. It's almost certainly a pretty unusual user who uses Ramda to build methods for her OO API. Should we just stop catering to that?

@asaf-romano
Member

The only interesting use case I see for preserving context is pseudo-decorators set on a singleton object; something like this:

const MessagesService = {
    BASE_URL: "...",

    initialize: once(function()
    {
    }),

    getMessages: memoize(function(msgId)
    {
    }),

    someSwitch: cond([
   [..., function() { return this.(...) }],
   [..., function() { return this.(...) }],
   [..., function() { return this.(...) }]
    ])
}

The cond case is something easy to give up on, but the first two are a tough call.

@kwijibo
Contributor

kwijibo commented Jan 31, 2016

The only issue is that, to those unfamiliar with the applicative functor instance for functions, it isn't obvious how this works.

I had to read the source code to see how it worked; you need to know that R.lift uses R.ap, which has special support for functions through their .apply method. This seems to go beyond what the documentation says R.lift is capable of supporting, i.e.:

an Array or other object that satisfies the FantasyLand Apply spec.
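Concretely, when both arguments are functions, R.ap behaves like the S combinator. A minimal sketch of that behavior (not Ramda's source):

```javascript
// ap for the function applicative: apply f's result at x to g's result at x.
// Minimal sketch of the behavior, not Ramda's source.
const ap = (f, g) => x => f(x)(g(x));

const add = a => b => a + b;
ap(add, n => n * 2)(3); //=> 9  (add(3)(3 * 2))
```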

@benperez
Contributor

benperez commented Feb 4, 2016

Given that dispatching, which is a big part of fantasy-land and of this library, requires use of the this keyword, we should probably take the stance that ramda functions should always respect context. You could go a step further and say that all ramda functions that accept functions will evaluate those functions in the same context that they are called in.

This may not affect the API of ramda functions but it will certainly—for better or worse—affect their implementations. I would be in favor of deprecating converge and adding a code branch in lift specifically designed for functions (it respects context).

@CrossEye
Member

CrossEye commented Feb 4, 2016

Dispatching is an orthogonal concern. When we dispatch, the context that we would have been preserving is lost anyway.

The context that we would preserve would be the sort of thing that you can see in _curryN or _pipe, where functions that alter functions often maintain the this supplied to them, so that they might be used as OO methods when necessary.

The trouble is that many other functions could be used as methods too, but we've only taken this care for the ones that specifically return functions, not the ones that, because of the autocurrying, also could return functions which might be used as methods.

For instance, this doesn't work:

var region = {
  tax: 0.06,
  adjustPrices: map(function(val) {
    return (1 + this.tax) * val;
  })
};

region.adjustPrices([5, 10]); //=> Error: Cannot read property 'tax' of undefined

because the relevant line of map doesn't preserve context:

    result[idx] = fn(functor[idx]);

If we replaced that with

    result[idx] = fn.call(this, functor[idx]);

Then we would get the expected behavior.
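For illustration, here is a standalone map that forwards the caller's this in this way. A sketch of the idea only, not Ramda's actual implementation:

```javascript
// A map that forwards the caller's `this` to the callback.
// Sketch of the idea only, not Ramda's actual implementation.
function contextMap(fn) {
  return function(list) {
    var result = [];
    for (var idx = 0; idx < list.length; idx += 1) {
      result[idx] = fn.call(this, list[idx]); // preserve context
    }
    return result;
  };
}

var region = {
  tax: 0.06,
  adjustPrices: contextMap(function(val) {
    return (1 + this.tax) * val;
  })
};

region.adjustPrices([5, 10]); //=> [5.3, 10.6] (modulo floating point)
```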

But it's far from clear that we would want to do that across all of Ramda's functions. That's part of the reason that I'm leaning toward removing it. Part of it is simply that it's always been on spec. No one has ever requested it. Now that it's been broken for a while, no one has complained that it's missing.

Dispatching works differently. Since we are actually calling a method on an object, the existing context is necessarily lost.

@benperez
Contributor

benperez commented Feb 4, 2016

Thanks for the clarification on the state of dispatching versus normal context @CrossEye, the distinction is now clear in my mind.

Given that ramda is already so inconsistent about respecting context, I say we just scrap it as a requirement altogether. At the end of the day, if the user wants to use context internally in functions which accept functions, he or she can always simply bind any necessary context to the function itself.

@dalgard

dalgard commented Feb 16, 2016

Scrapping context is cool with me, too, if it significantly improves performance. Anyway, back to the original topic – converge vs. lift ;)

I feel like I can almost understand how one can replace the other – something to do with the Apply spec. If converge is actually entirely replaceable with lift, it would be really cool if the documentation mentioned how lift can be used with functions instead of just arrays (if I'm making even the least bit of sense here...).

@CrossEye
Member

Agreed. A lot of the docs that delegate to the user types really need some work. I think we simply still don't quite know how to do that well. I thought I remembered that the lift / liftN docs had Maybe examples; shows what I know.

@benperez
Contributor

In the interest of moving this along, I'll open a PR tonight to deprecate converge and revamp the docs for lift and liftN to include info and examples for functions. If we don't settle on deprecating converge then we can at least improve the docs for lift.

@arcseldon
Contributor

@benperez - like the way sanctuary describes lift - just thought I'd mention it as you're revamping docs.

@dalgard

dalgard commented Feb 17, 2016

@benperez: That would be really awesome. It's like @paldepind says:

The only issue is that, to those unfamiliar with the applicative functor instance for functions, it isn't obvious how this works.

The way lift is described in Sanctuary probably makes sense, but needs mo' words for the uninitiated.

@dalgard

dalgard commented Feb 17, 2016

Can lift somehow also replace useWith, btw?

@CrossEye
Member

Can lift somehow also replace useWith, btw?

I can't see how, but nothing would surprise me at this point!

@buzzdecafe
Member

In the interest of moving this along, I'll open a PR tonight to deprecate converge

(( 🔔 ))

Sorry for being pushy, but i would like to get 0.20 out the door and would like to include deprecating converge as part of it. lmk if you'd rather that i just go ahead and add the tag.

@benperez
Contributor

Sorry for being pushy, but i would like to get 0.20 out the door

No worries, the PR with the deprecated tag is #1649. I can update the docs for lift in another PR.

@dalgard

dalgard commented Feb 18, 2016

It would be helpful if the documentation for how converge can be replaced by lift were in place before the deprecation.

@benperez
Contributor

@dalgard I might be wrong but I think that 0.20 will just include a deprecation tag in the docs, R.converge won't actually be removed from ramda until 0.21.

@davidchambers
Member Author

It would be helpful if the documentation for how converge can be replaced by lift were in place before the deprecation.

I agree. Failing that, we should explain the converge → lift translation in the upgrade guide.

@CrossEye
Member

I have not been pushing for this, partly because I knew I'd never done my due diligence on the converge -> lift change on my own. But I think the question from @asaf-romano makes this much more interesting.

Does someone have as convenient a way to implement juxt that comes anywhere close to the expressive elegance of converge(Array.of)?

@benperez
Contributor

@asaf-romano thanks for bringing this up; the use of converge in the definition of juxt highlights something that we'd need to consider in a lift-based formulation of converge.

R.juxt = _curry1(function juxt(fns) {
  return converge(_arrayOf, fns);
});

depends on supporting variadic functions in both arguments to converge. _arrayOf and the elements of fns are both variadic. This presents a challenge when trying to lift a function to act on other functions.
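A hand-rolled variadic juxt makes the dependency concrete (a sketch, not Ramda's implementation):

```javascript
// juxt applies every function in fns to the same argument list.
// Both juxt's result and the elements of fns are variadic here.
// Hand-rolled sketch, not Ramda's implementation.
const juxt = fns => (...args) => fns.map(f => f(...args));

juxt([Math.min, Math.max])(3, 4, 9, -3); //=> [-3, 9]
```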

As I mentioned much farther up the chain, we're going to have to add a special-case implementation of R.liftN for functions if we want to continue to support type signatures like:

R.converge :: ((x1, x2, ...) -> z) -> [((a, b, ...) -> x1), ((a, b, ...) -> x2), ...] -> ((a, b, ...) -> z)

I personally find these kinds of variadic functions very hard to reason about in real code and to me this difficulty grows dramatically when we're talking about higher-order functions that accept, combine, and return variadic functions.

I'm personally in favor of moving away from variadic functions in higher-order functions in ramda but I also understand that this is JS, so a lot of other people might not see things that way. There are a couple different options for handling them in lift if we choose to support them.

@asaf-romano
Member

The issue isn't only with variadic functions. converge did a couple more things with its focus on functions:

  • It set the arity based on the input functions.
  • It curried the resulting function.

I used to really dislike converge for some reason, but over time, and especially after it was changed to take the combined functions as a list, I realized that my only problem with it is its name. I'd rather go with something like combine.

I think it's somewhat unrealistic to expect users to use lift this way. The abstraction is just a little over the top, too far away from the lambda you're trying to write:

var combine = converge;

var copyAtoB = combine(assoc("b"), [prop("a"), identity]);
copyAtoB({ a: 1 }); //=> {"a": 1, "b": 1}

var copyAtoB_L = lift(assoc("b"))(prop("a"), identity);
copyAtoB_L({ a: 1 }); //=> {"a": 1, "b": 1}

@davidchambers
Member Author

@asaf-romano, your example can also be written in terms of a couple of nifty combinators:

//    C :: (a -> b -> c) -> b -> a -> c
const C = f => y => x => f(x)(y);

//    S :: (a -> b -> c) -> (a -> b) -> a -> c
const S = f => g => x => f(x)(g(x));

S(C(R.assoc('b')))(R.prop('a'))({a: 1});
// => {a: 1, b: 1}

@asaf-romano
Member

I'm not trying to offend my coworkers that much ;)

@stoeffel

@davidchambers this may be of interest to you. I just ported data.aviary.birds (a collection of combinators) to js: https://github.com/fantasyland/fantasy-birds (documentation not finished).
It also contains C and S, as cardinal and starling.

@davidchambers
Member Author

Neat project, @stoeffel! Also, I'm excited to see Transcribe being used in the wild. :)

@stoeffel

Neat project, @stoeffel! Also, I'm excited to see Transcribe being used in the wild. :)

Glad you like it! Transcribe is awesome. The only thing missing is using it for more than one file. (btw sorry for going off topic 😊 )

@CrossEye
Member

@stoeffel: That's great!

@benperez
Contributor

OK, so to me converge as it currently stands has 3 problems:

  1. Accepts a variadic function as its first argument
    We can't know how many functions are supposed to be supplied as input transforming functions until the user actually supplies how many he or she wants. This is a source of ambiguity since we're mixing the type of the function with the data passed into it.
  2. Accepts functions with different lengths in its second argument
    The length of the resulting function is determined by scanning through the functions passed in and taking the max length. Again we're mixing types and data. I think having functions in a pipeline which operate on multiple arguments can be useful, but I'd rather see a more methodical approach like the one proposed by @scott-christopher in this thread.
  3. Context
    It seems like it's already been established in this thread that this isn't something we care to support in this function anymore.
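To make point 2 concrete, here is a hand-rolled illustration of deriving the resulting function's length from the branching functions. This is a sketch, not Ramda's implementation, and convergeN is a made-up name:

```javascript
// The result's length is the max of the branching functions' lengths.
// Hand-rolled sketch, not Ramda's implementation; convergeN is a made-up name.
const convergeN = (after, fns) => {
  const arity = Math.max(...fns.map(f => f.length));
  const g = (...args) => after(...fns.map(f => f(...args)));
  // Function#length is configurable, so we can report the computed arity.
  Object.defineProperty(g, "length", { value: arity });
  return g;
};

const f = convergeN((a, b) => a + b, [(x, y) => x * y, x => x + 1]);
f.length; //=> 2
f(4, 3);  //=> 17  (4 * 3 + (4 + 1))
```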

We can eliminate all of these problems by reshaping converge functionality to be much simpler and easier to reason about. It seems like @CrossEye has always made the point that we shouldn't let implementation concerns dictate API. It seems strange to let the implementation of R.juxt dictate the API for R.converge.

@CrossEye
Member

Responding from the bottom up:

It seems strange to let the implementation of R.juxt dictate the API for R.converge.

Absolutely. But I'm not quite sure what @asaf-romano meant to say. That elegant little chunk is not the current implementation of juxt; perhaps it's better than our current one, perhaps not, but it did a good job of demonstrating that lift(fn)(f1, f2, ...) is not a drop-in replacement for our current converge(fn, [f1, f2, ...]).

We can eliminate all of these problems by reshaping converge functionality to be much simpler and easier to reason about.

We can, but we need to decide whether we want to. We have not hit 1.0, so we can feel free to change API as we like, but we also have to decide when to break continuity, and for what reasons.

(3.) Context
It seems like it's already been established in this thread that this isn't something we care to support in this function anymore.

Agreed, this can just go. It's never shown its worth, and no one has ever really asked for it.

(2.) Accepts functions with different lengths in its second argument
The length of the resulting function is determined by scanning through the functions passed in and taking the max length. Again we're mixing types and data. I think having functions in a pipeline which operate on multiple arguments can be useful, but I'd rather see a more methodical approach like the one proposed by @scott-christopher in this thread

I'm not quite sure what you're suggesting by "mixing types and data." What we're doing is unifying the following sorts of signatures into one:

(a -> b -> d) -> [(e -> a), (e -> b)] -> (e -> d)
(a -> b -> c -> d) -> [(e -> a), (e -> b), (e -> c)] -> (e -> d)
(a -> b -> c -> d) -> [(e -> f -> a), (e -> f -> g -> b), (e -> c)] -> (e -> f -> g -> d)

Now it's certainly arguable that this is far too much for one function, but by the same token that we wouldn't want to use the juxt implementation to drive an API change to converge, is the same not true about using lift to determine what we want this to be? This is only coming up in the context of replacing the current implementation with a more elegant one.

(1.) Accepts a variadic function as its first argument
We can't know how many functions are supposed to be supplied as input transforming functions until the user actually supplies how many he or she wants. This is a source of ambiguity since we're mixing the type of the function with the data passed into it.

Part of the somewhat uncomfortable dynamic of Ramda is the interplay between "functional library" and "for Javascript developers". In Javascript variadic functions are a normal part of how one works. Configuring converge with a variadic combining function does feel very much in the spirit of the language.

@benperez
Contributor

Thanks for the thorough and thoughtful reply @CrossEye.

I'm not quite sure what you're suggesting by "mixing types and data." What we're doing is unifying the following sorts of signatures into one:

The problem with a lot of ramda's type signatures is that they don't actually represent types; they represent the way that types change based on the data passed into functions.
The signature for converge:

(x1 → x2 → … → z) → [(a → b → … → x1), (a → b → … → x2), …] → (a → b → … → z)

As you nicely point out, this is a description of many different "types":

(a -> b -> d) -> [(e -> a), (e -> b)] -> (e -> d)
(a -> b -> c -> d) -> [(e -> a), (e -> b), (e -> c)] -> (e -> d)
(a -> b -> c -> d) -> [(e -> f -> a), (e -> f -> g -> b), (e -> c)] -> (e -> f -> g -> d)

The problem is that the second arguments—[(e -> a), (e -> b)], [(e -> a), (e -> b), (e -> c)], and [(e -> f -> a), (e -> f -> g -> b), (e -> c)]—aren't really types. [(e -> f -> a), (e -> f -> g -> b), (e -> c)] is a list containing 3 functions that have 3 different types; that's what I mean by data. Furthermore, the type of the returned function is dependent on what functions (data) you pass in the second argument. It's impossible to represent the type of the function a priori because its behavior is to change its type based on the data passed into it.

In Javascript variadic functions are a normal part of how one works

I'm unconvinced by this. Variadic functions are supported in Javascript and are even frequently used, but they're by no means necessary or even more expressive than simply using lists for arguments.

@CrossEye
Member

I wrote:

But I'm not quite sure what @asaf-romano meant to say. That elegant little chunk is not the current implementation of juxt.

But I was confused. That is what's in HEAD. I was looking at 0.19.1, which has a different implementation. I actually merged Asaf's changes, so I should remember this. 😄

@CrossEye
Member

CrossEye commented Feb 20, 2016

The problem with a lot of ramda's type signatures is that they don't actually represent types; they represent the way that types change based on the data passed into functions.

I think we start to veer into philosophical grounds here. What makes something a type? It would be easy enough to argue the same way that :: (a -> b) -> f a -> f b doesn't represent anything on its own except for a vaguely connected group of types such as :: (Number -> String) -> [Number] -> [String] and :: (Rectangle -> Number) -> Maybe Rectangle -> Maybe Number.

But it's clear enough that the current converge does represent something meaningful:

converge(f, [g1, ..., gn]) ≡ (a1, ..., ak) => f(g1(a1, ..., ak), ..., gn(a1, ..., ak))

There is plenty of room to argue that this should be simplified, that, for instance, k should be fixed at 1, or n at 2. I absolutely love the arrow approach @scott-christopher is discussing. But the current version has shown itself to be a useful function, and we do need to consider what we'd give up in restricting it.
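That meaning is easy to check with a hand-rolled converge (a sketch for illustration, not Ramda's implementation):

```javascript
// converge(f, [g1, ..., gn])(a1, ..., ak) = f(g1(a1, ..., ak), ..., gn(a1, ..., ak))
// Hand-rolled sketch for illustration, not Ramda's implementation.
const converge = (f, gs) => (...args) => f(...gs.map(g => g(...args)));

const subtract = (a, b) => a - b;
const multiply = (a, b) => a * b;
const add = (a, b) => a + b;

converge(subtract, [multiply, add])(4, 3); //=> 5  (4 * 3 - (4 + 3))
```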

In Javascript variadic functions are a normal part of how one works

I'm unconvinced by this. Variadic functions are supported in Javascript and are even frequently used, but they're by no means necessary or even more expressive than simply using lists for arguments.

I don't mean to imply that one cannot live without them. I mean that the substantial majority of Javascript developers do in fact use them. My dividing line for Ramda has been pretty simple. It has seemed a good thing to remove variadic functions from Ramda, and they're almost all gone. But Ramda should still work well with users' variadic functions, since this is a library designed for Javascript programmers and not for Haskell programmers slumming it in JS.

If you're really looking to do something closer to Haskell in JS, you're sure to end up disappointed: you'll never really know that the function supplied really accepts Rectangles and returns numbers, and without some really expensive run-time checks, you won't know until quite late that the list supplied contains only Rectangles. So, while Ramda does try to be much more principled than many other JS libraries, it does not try to overcome the basic nature of the language. If a user wants that, she should probably try PureScript, ClojureScript, Elm, or at least TypeScript or Flow.

@benperez
Contributor

If you're really looking to do something closer to Haskell in JS, you're sure to end up disappointed: you'll never really know that the function supplied really accepts Rectangles and returns numbers, and without some really expensive run-time checks, you won't know until quite late that the list supplied contains only Rectangles

To me, this issue is less about "doing Haskell in JS" and more about principled programming. Ramda already takes all sorts of positions that are at odds with the majority of "Javascript developers"—no mutation, well defined input and output types, avoiding optional arguments, etc. These principles might come from Haskell, Clojure, or elsewhere, but I think they're motivated by writing better Javascript, not "slumming it" in an inferior language. Of course it's possible to write Javascript that does all of those things that ramda rejects and still works as intended, but as you've mentioned, Ramda is simply choosing to take a more principled approach. Given that vanilla Javascript lets the user do pretty much whatever he or she wants, Ramda's main guiding principle strikes me as "addition by subtraction", which is a good thing.

The fact that variadic functions fall on the wrong side of that principle strikes me as a little arbitrary, but I can still respect that you've chosen to put a stake in the ground.

Having said all that, maybe this abstraction is a bit too much of a jump for ramda as a library. If the requirements of having nice things (more general abstractions) conflict with what currently makes converge really powerful to most users, then we should probably revert its deprecation.

@CrossEye
Member

Ramda already takes all sorts of positions that are at odds with the majority of "Javascript developers"—no mutation, well defined input and output types, avoiding optional arguments, etc. [ ... ] Of course it's possible to write Javascript that does all of those things that ramda rejects and still works as intended, but as you've mentioned, Ramda is simply choosing to take a more principled approach. Given that vanilla Javascript lets the user do pretty much whatever he or she wants, Ramda's main guiding principle strikes me as "addition by subtraction", which is a good thing.

I don't think I've been well able to articulate what is for me a very clear dividing line. Ramda should be quite principled. We don't mutate your data. We don't have optional arguments. Our functions are more strictly typed.

But we don't take a particular stand on what you do. If you want to pass to reduce a function that mutates the accumulator, we simply don't care. If the first function you pass to pipe contains optional parameters, it might be odd, but it's your lookout. If you are passing different-arity functions to converge, you get a function which might seem strange to me, but if it works for you, that's fine.

This has been my personal view for a long time. I don't know if it's the correct guideline for the larger community, and perhaps it's time to figure that out.

@dalgard

dalgard commented Feb 26, 2016

@CrossEye: I think your view works for the larger community, too.

@kapooostin

This way of using lift no longer works as of v0.22:

R.lift(R.subtract)(R.multiply, R.add)(4, 3); //=> 5

Test in v.0.21

Test in v.0.22

Is there a way to achieve the same result in the current version of Ramda?

@kapooostin

I guess this change in behaviour was caused by this PR: #1741.

@CrossEye
Member

That's interesting, and I didn't ever notice it.

I still often think in terms of converge, and then try to decide if I can use lift instead. This is probably not a good habit, but converge (and useWith) can really become habit-forming. Now that I'm always working in ES6 environments, point-free is much less of a concern, since arrows make for simple lambdas in JS. And I use these functions much less.

Do you see any problems now that lift no longer does this?

@monfera

monfera commented Dec 26, 2017

@CrossEye it's totally tangential to the overall discussion, but I haven't really felt that the main benefit of point-free programming is more compact code, which may be how you think about it; as you say, fat arrows do a reasonable job of replacing it. Some other benefits may be:

  • not using lambda expressions, and therefore not using bindings (symbol names), so the code is less domain-dependent
  • of course, names can be simply a and b, but in that case there's even less value in using a lambda expression - it doesn't give the code reader an intuition of what the variables are, e.g. unitPrice and quantity
  • using point-free is a way for structuring code into a graph, where for example a converge or useWith is a node, and its arguments are nodes too; entire programs can be written this way, while the use of lambdas breaks this graph property (the graph property can be useful if one writes a program that transforms a point-free program)

Having said all this, maybe it reflects my interest in knowing more about the actual benefits of point-free programming, for I used it quite a bit and found drawbacks, at least with current JavaScript browsers and IDEs:

  • (superficial) it's easier to put in a debugger statement or set a Dev Tools breakpoint on a line in a {...} function body than doing the same with point-free - having to wrap around a thing, cutting, inserting some tap()-like thing, and placing the cut thing into it, esp. that ramda etc. don't like/use variadic HOFs - and forget about Dev Tools breakpoints (yes I find it incredibly useful to see values; if someone doesn't, I'm interested in learning how one can wean)
  • (deep) I found that code refactored into a point-free style looked compact and elegant, but (for me) harder to understand, presumably due to lower redundancy compared to domain-specific names in lexical bindings in lambdas, whether the code was written by an (excellent) coworker or by me the previous day
  • (deeper) for some reason, nontrivial point-free code was rigid for me to refactor, if the use case changed and some previous assumption stopped being valid, or the requirements changed slightly - it often required a quite elaborate refactoring that ended up in a fairly different shaped new point-free code, so it felt to me that point-free is more brittle than code with lexical bindings

I'm interested in other viewpoints on this, whether similar experiences or on the contrary, maybe with some advice.

@CrossEye
Member

@monfera: Thanks for the feedback.

I agree that there are other advantages to using point-free as well as a number of drawbacks. I'm interested in the idea of a graph of functions. Do you get practical benefits from this? Do you use particular tools to manipulate or view such a graph?

As to the drawbacks you mention, I don't really have the debugger issue. Much of my point-free code is in pipe/compose chains, where it's easy to add a tap, even one that contains a debugger statement. But I don't step through code all that often.

The second two are related, I believe. If it's harder to understand, it's harder to refactor. And probably vice versa as well.

What I think gains back some readability is to always use type signatures on one's code. This often makes it easy to assemble function chains mechanically.

For the sort of refactoring you describe, I will usually write new functions and only then see if I can simply rename my old ones or in some other way reuse them. In other words, when I have substantial reworking to do, I tend to rewrite rather than anything that might generally be called refactoring. But having a good test suite means that I can still call it refactoring if anyone asks! 😄

@monfera

monfera commented Dec 27, 2017

@CrossEye yes, tap is easiest to add in pipe/compose, especially as they're still variadic; all other nodes of interest would need to be wrapped, e.g. by

const breakpoint = originalNode => compose(tap(d => { debugger; return d; }), originalNode)

and even with this, I have access to the data at a single edge, while a breakpoint in a lambda expression lets me inspect all available lexical bindings, advance the code to some other line etc. It's like taking a single snapshot through a keyhole as opposed to just opening the door. Dev Tools is great and I like having easy access to its power.

Yes there's relation between difficulty of understanding code and difficulty of refactoring. When mentioning rigidity, I assumed good understanding of both the current version and the new version. Even then, the addition of an innocent-looking piece of data, or switch to a new input format may largely change the shape and structure of the code - as you say it can be a rewrite, not a sequence of incremental code changes. You mentioned testing. Small changes often lead to different structures, therefore many functions are superseded, unit tests have to be scrapped and redone, if they're done at the granularity of individual functions and not eg. modules.

The brittleness seems to come from the fact that data paths and information representations (eg. object vs array vs tree) are "hardwired" into the code structure, while with lambda expressions, you typically have a bunch of lexical bindings and maybe bindings from outer lambdas, and you're way less restricted from referencing any of these or their parts, which is useful for prototyping.

What just needs a local change in one specific lambda-style function will usually need a different subgraph of converge, useWith, compose, zip etc. When frustrated with this friction, I even started thinking of point-free as less declarative than lambda-style, because it seemed to make it harder for me to declare my intent when I was in an exploratory mode and didn't code something that I fully understood from the outset and never needed to change. Felt like freely waving a data fabric vs. having to use needlepoint to thread the data.

Point-free code was less stable, small changes perturbing it significantly. I think part of the merit of declarative programming is that the "truths" you declare are efficient (in development time) to curate over time. I ended up mixing point-free with lambda style on personal projects, ie. quite a few pipe ops and even lift, but avoiding the otherwise neat useWith, converge, lens (on client projects I follow whatever is the team decision).

So all in all there's nontrivial cost (to me) when using the point-free style, maybe because my code may not be as dominated by pipe/compose as yours. This is why I'm on the lookout for the benefits that justify it.

The "graph of functions" thing is handy if you want to treat your code as data, and don't want to go through the route of entirely relying on a JavaScript parser. For example, a user of your system may customize the behavior of the application by composing functions, or you want to serialize your logic, or want to run analytics on your source code. A directed acyclic graph is also more language-neutral than full blown JS, so you can move logic between languages if need be. There are other tools that bridge over languages, eg. the Rx family (RxJS etc.) or transducer libraries, and assuming a ramda-like library or a target language with FP support, one can transfer reducer functions etc. without a major rewrite.

Ah, one more thing with the graph style of coding. You can start with a single-pass implementation of whatever you work on, ie. you apply the resulting module (ie. the root node of the graph) to some value. Then maybe you replace the ramda functions you use with lifted functions that work on eg. reactive streams (observables) rather than values. The graph doesn't need to change, and now it works in streaming mode rather than in batch mode.

But again, these are just my thoughts and I probably don't think of some other benefits that go with it.

@CrossEye
Member

But again, these are just my thoughts...

They were well-worth sharing. Thank you.
