
Do "smart pipelines" allow usage of curried functions? #116

Closed
masaeedu opened this issue Apr 5, 2018 · 92 comments

@masaeedu

masaeedu commented Apr 5, 2018

I have the following code:

const size = x => x |> Iter.map(Fn.const(1)) |> Iter.fold(Int.plus)(0)

Both map and fold are curried functions of multiple arguments (2 and 3 respectively).
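
For context, a minimal sketch of what such curried helpers might look like; these definitions are hypothetical stand-ins (not taken from any particular library), included only to make the shapes concrete:

  // Hypothetical curried iterable helpers, for illustration only
  const Iter = {
    map: f => function* (xs) { for (const x of xs) yield f(x); },
    fold: step => init => xs => {
      let acc = init;
      for (const x of xs) acc = step(acc)(x);
      return acc;
    },
  };
  const Fn = { const: c => _ => c };      // always returns c
  const Int = { plus: a => b => a + b };  // curried addition

  // `size` then maps every element to 1 and sums, i.e. it counts the elements.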

The TC39 proposal has a slide that says of the "smart pipeline" proposal:

Only identifiers and dots; no ( ), [ ], or other ops

Does this mean the code above would be illegal? How would I express this instead?

Here is a more extreme example of the same concept: pipelining through multiple partially applied 2-ary functions.

  const downloadAndVerify =
    checkExists
    |> FSFX.chain(exists => (exists ? doNothing : download))
    |> FSFX.chain(_ => computeHash)
    |> FSFX.chain(hash => fs =>
      checksum === hash
        ? Flt.of(undefined)
        : Flt.reject(
            `Checksum of downloaded file was ${hash}: expected ${checksum}`
          )
    );
@mAAdhaTTah
Collaborator

In smart pipelines, it would be expressed thus:

const size = x => x |> Iter.map(Fn.const(1))(#) |> Iter.fold(Int.plus)(0)(#)

You need to include the lexical topic token (currently #) to make it explicit. It's intended to avoid footguns.

@masaeedu
Author

masaeedu commented Apr 5, 2018

@mAAdhaTTah What kind of footguns does this avoid? When you're programming in a pointfree style as shown above, you're basically going to be writing # every time, often in an awkward place at the end of a multiline function application. Off the top of my head I can think of map, filter, reduce, scan, flatMap, concat, etc. etc, all of which take more than one argument, and must have all but the last applied away in order to sensibly use them in a pipeline.

There needs to be some way of making reverse application play well with forward function application, otherwise it will be very painful to use.

@mAAdhaTTah
Collaborator

mAAdhaTTah commented Apr 5, 2018

@masaeedu Smart Pipelines aren't optimized for pointfree-style, except insofar as x |> double is valid. It's also not that common in JS-land in general, so there's an argument that optimizing for avoiding footguns in the common case is more important.

From the Smart Pipelines README:

The writer must clarify which of these reasonable interpretations is correct:

input |> object.method(#, x, y);
input |> object.method(x, y, #);
input |> object.method(x, y)(#);

There's more details there. I'll let @js-choi expand on the argument for this though, as this is his proposal.

@masaeedu
Author

masaeedu commented Apr 5, 2018

Neither of the first two seems like a reasonable interpretation for an operator a |> b === b(a), for b === object.method(x, y), but I won't argue that point. If this is not a supported use case for this operator, I'd request that no attempt be made to "subsume" function composition using it. A separate function-composition operator can be provided that just does simple function composition and facilitates programming in the style above.

@masaeedu
Author

masaeedu commented Apr 5, 2018

If there is still any time to tweak the proposal, you might consider just interpreting all a |> b where b does not explicitly contain # as a |> (b)(#) === (b)(a).

This means input |> await object.method(x, y) is interpreted as (await object.method(x, y))(input). If this is not what the user wanted, they can explicitly specify await (object.method(x, y)(#)) or whatever it is they actually meant. Mentally desugaring any code involving |> would be quite straightforward; if b in a |> b doesn't contain #, desugar to (b)(a), irrespective of what else the b expression holds.
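
To make the suggested rule concrete, a few illustrative desugarings follow; this is only a sketch of the idea above, not the behavior of any current draft:

  // Suggested rule: if the pipeline body contains no #, treat `a |> b` as `(b)(a)`.
  x |> f             // (f)(x): same as today's bare style
  x |> f(y)          // (f(y))(x): curried call, no # required
  x |> f(y, #)       // f(y, x): an explicit # wins, as in the current draft
  x |> await g(y)    // (await g(y))(x): the awaited value must itself be a function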

@js-choi
Collaborator

js-choi commented Apr 5, 2018

Edit: Oh, it looks like there have been some updates since I last loaded the page. I’ll reply to the other messages when I can.


Thanks for the question. The answer is that, yes, smart pipelines can accommodate curried functions. But if you create the curried functions inline, then you need to use topic style.

This phrase:

Only identifiers and dots; no ( ), [ ], or other ops

…applies only to smart pipelines’ bare style, which is a special convenience syntax for simple, unambiguous unary functions. All other expressions, including function calls on function calls, are supposed to use topic style.

const size = x => x |> Iter.map(Fn.const(1))(#) |> Iter.fold(Int.plus)(0)(#)

One benefit of this is semantic clarity. To summarize that link, in general, input |> f(otherArg) is ambiguous between three reasonable interpretations:
f(otherArg, input),
f(input, otherArg), and
f(otherArg)(input).

With smart pipelines, input |> f(otherArg) is an early error, which forces the writer to clarify which interpretation they mean, for the human reader’s benefit:
input |> f(otherArg, #),
input |> f(#, otherArg), or
input |> f(otherArg)(#).

As for your second code block, I’m not sure what FSFX and Flt are; they don’t resemble Ramda’s or Lodash/FP’s APIs. You’d have to give definitions of your helper functions for me to translate it to smart pipelines. The best I can guess right now is:

const downloadAndVerify = input => input
|> checkExists
|> checkExists(#) ? # : download(#)
|> do {
  const hash = computeHash(#);
  if (checksum === hash) #;
  else throw new Error(`Checksum of downloaded file was ${hash}: expected ${checksum}`);
};

Smart pipelines can accommodate curried functions. But if you create the curried functions inline, then you need to use topic style, not bare style.

@masaeedu
Author

masaeedu commented Apr 5, 2018

@js-choi The FSFX.chain function is a binary function:

chain :: (a -> FileSystem -> Fluture b) -> (FileSystem -> Fluture a) -> (FileSystem -> Fluture b)

equivalent to monadic bind. checkExists, doNothing, download etc. are all instances of the monad, i.e. functions FileSystem -> Fluture x.
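
A minimal sketch of the shape involved, substituting Promise for Fluture purely for illustration (these definitions are hypothetical, not Fluture's actual API):

  // FSFX a  ~  FileSystem -> Promise a   (a "reader" over an async effect)
  const FSFX = {
    of: value => _fs => Promise.resolve(value),
    chain: f => ma => fs => ma(fs).then(a => f(a)(fs)),
  };
  // checkExists, download, computeHash, etc. would each be FileSystem -> Promise x,
  // so FSFX.chain(g) sequences one such action into the next while threading fs.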

You can't just unwrap things as you've done because the chain is essential; this is a monadic computation. Similarly, if I have a functor of things, I can't just get rid of my map and unpack the computations to the top level.

Here's a simpler example with observables:

  inputObservable
    |> Obs.map(x => x * 2)
    |> Obs.flatMap(x => Obs.delay(1000, x))
    |> Obs.filter(x => x < 50)
    |> Obs.scan(x => y => x + y)

@masaeedu
Author

masaeedu commented Apr 5, 2018

Regarding your three reasonable interpretations, I don't think any of those is reasonable except for the last one. a |> b === b(a) by simple lexical substitution implies that foo |> x.y() is x.y()(foo). I guess what seems intuitive is subjective, so perhaps it really is a significant problem for users that a |> b === b(a) suggests foo |> x.y() desugars to x.y()(foo). I'm just one data point.

@mAAdhaTTah
Collaborator

Neither of the first two seems like a reasonable interpretation for an operator

Elixir, for example, does in fact slot in the pipeline value as the first parameter of the function, so that is certainly a feasible interpretation when coming from another language. I don't think the second is as likely, as JS doesn't have built-in currying, but it's not entirely unreasonable either. In any case, if it's ambiguous at all, forcing the developer to be explicit seems entirely reasonable.

That said, it sounds like you'd prefer F# Pipelines overall, as that mirrors more closely your expectations for how the operator will function. None of the current proposals have been "adopted", so there's plenty of time to adjust things as you like.

@mAAdhaTTah
Collaborator

One thing I'll mention is that if we go with Smart Pipelines, there is less of a need to use curried functions. You could just have normal n-ary functions with a placeholder. So it wouldn't require the parens and would look thus:

const size = x => x |> Iter.map(Fn.const(1), #) |> Iter.fold(Int.plus, 0, #)

@masaeedu
Author

masaeedu commented Apr 6, 2018

@mAAdhaTTah That doesn't help you with x |> Iter.map(Int.add(1)); Int.add still needs to be curried. There are many other examples of functional programming patterns that cannot be accommodated, and even if they could be, they would be restricted to within the body of the pipeline operator. Curried functions are a perfectly adequate solution to the problem of needing curried functions.

@js-choi
Collaborator

js-choi commented Apr 6, 2018

@mAAdhaTTah is correct that Elixir’s pipe operator tacitly inserts its input into first parameters. Clojure supports both tacit first- and last-parameter insertion. The R language’s magrittr library also uses tacit first-parameter insertion.

More importantly, there are many existing and idiomatic JavaScript APIs whose functions’ “primary inputs” are first parameters, such as DOM fetch, ES new Uint8Array, DOM new WebSocket, Node fs.readSync, and pr’s fs.read. It goes either way in real APIs; both ways are reasonable interpretations, as well as autocurrying-style unary-function creation.
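
For instance, in topic style the insertion point is spelled out even for these first-parameter APIs (a rough illustration using the draft's # token; the surrounding variable names are made up):

  url    |> fetch(#, { method: 'GET' })   // primary input is fetch's first parameter
  buffer |> new Uint8Array(#)             // same for the typed-array constructor
  url    |> new WebSocket(#, protocols)   // and for new WebSocket(url, protocols)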


When you're programming in a pointfree style as shown above, you're basically going to be writing # every time, often in an awkward place at the end of a multiline function application. Off the top of my head I can think of map, filter, reduce, scan, flatMap, concat, etc. etc, all of which take more than one argument, and must have all but the last applied away in order to sensibly use them in a pipeline.

With regard to “you’re basically going to be writing # every time,” this is not true for applying to unary functions. o.m(input) can simply be written as input |> o.m. (It is true for n-ary functions, but, as discussed above, pipelining into n-ary function calls is ambiguous anyway.)


If there is still any time to tweak the proposal, you might consider just interpreting all a |> b where b does not explicitly contain # as a |> (b)(#) === (b)(a).

This possibility was considered. It was passed over in favor of the current simple covering rule (“you must include #, unless it’s a simple function call”). Syntactic locality is another goal of the proposal. Such context sensitivity would require frequent large lookahead; this is a footgun for human readers.

Consider v |> foo(blah, bar, partial(foo, x, y + z), blah). Is this a tacit function call, or is it a topic-style expression? For a human reader to determine this crucial difference, they would have to read the entire pipeline-body expression carefully. They could easily miss the presence of a #, such as with v |> foo(blah, bar, partial(foo, x, # + z), blah), which is a topic-style expression.

To mitigate this uncertainty, v |> foo(blah, bar, partial(foo, x, y + z), blah) is an early error. This guarantees to the reader that, if the program compiles, it’s in topic mode, and there’s a # somewhere in the body expression.


The FSFX.chain function is a binary function:

chain :: (a -> FileSystem -> Fluture b) -> (FileSystem -> Fluture a) -> (FileSystem -> Fluture b)

Ah, so this is using Fluture. I’d have to study its API more to give a better translation of the original code block #116 (comment). But if nothing else, that could be accommodated by adding (#) or , #) to the end of each pipeline step, making it clear that it is a function call rather than just an expression; and if the writer forgets any (#), they will know with an immediate early error. This is indeed a tradeoff—a disadvantage for a particular API style in return for advantages in several other API styles.

Eventually the smart-pipe syntax might be extended with built-in higher-order forms of application like functor mapping, monadic chaining, and Kleisli composition; @tabatkins, @isiahmeadows, and I have discussed this before (http://logs.libuv.org/tc39/2018-03-13, #116 (comment), tc39/proposal-smart-pipelines#24). That idea is out of scope for now, though; I have my hands full enough with the core proposal’s Babel plugin.

The same goes for the code block in #116 (comment):

inputObservable
|> Obs.map(x => x * 2)(#)
|> Obs.flatMap(x => Obs.delay(1000, x))(#)
|> Obs.filter(x => x < 50)(#)
|> Obs.scan(x => y => x + y)(#)

Here again, this is a tradeoff—a disadvantage for a particular API style in return for advantages in several other API styles. (At the very least, an important footgun is avoided: if the writer forgets any ending (#), they will know with an immediate early error.)


That doesn't help you with x |> Iter.map(Int.add(1)); Int.add still needs to be curried.

As far as I can tell, x |> Iter.map(Int.add(1), #) would still work.


Regarding your three reasonable interpretations, I don't think any of those is reasonable except for the last one. a |> b === b(a) by simple lexical substitution implies that foo |> x.y() is x.y()(foo). I guess what seems intuitive is subjective, so perhaps it really is a significant problem for users that a |> b === b(a) suggests foo |> x.y() desugars to x.y()(foo). I'm just one data point.

One thing I'll mention is that if we go with Smart Pipelines, there is less of a need to use curried functions. You could just have normal n-ary functions with a placeholder.

There are many other examples of functional programming patterns that cannot be accommodated, and even if they could be, they would be restricted to within the body of the pipeline operator. Curried functions are a perfectly adequate solution to the problem of needing curried functions.

First-parameter function calls, last-parameter function calls, autocurrying-function calls, and non-function-call expressions are all used in JavaScript. All of these styles, not just the autocurrying style, exist and are idiomatic JavaScript. (To repeat examples above of JavaScript APIs whose functions’ “primary inputs” are first parameters: DOM fetch, ES new Uint8Array, DOM new WebSocket, Node fs.readSync, and pr’s fs.read.) The smart-pipelines proposal attempts to accommodate all of these common styles, while ensuring distinguishability between them.

Thanks for your patience, @masaeedu.

@masaeedu
Author

masaeedu commented Apr 6, 2018

More importantly, there are many existing and idiomatic JavaScript APIs whose functions’ “primary inputs” are first parameters, such as ...

That's fine, but this is a different contention from saying someone would get confused into thinking x |> y()() is equivalent to y(x)() or y()(x), especially if the behavior of the bare style is clearly documented as a |> b === a |> b(#). We're not debating whether to desugar x |> y.foo() to some other, more popular idiom in JS; we're debating whether it should be illegal syntax, for fear of someone getting confused over what it means. Disallowing x |> y.foo() isn't doing anything to make usage of fetch/Uint8Array etc. any sweeter.

this is not true for applying to unary functions

Yes, it's not true that you're going to append it in strictly 100% of expressions, but most combinators of interest accept parameters that inform their behavior, so, as I said, you're basically going to be writing it every time. You can of course make a temporary variable with const myMap = map(add(1)) to turn these combinators into unary functions, and then do foo |> myMap, but this defeats the purpose of the operator.

Syntactic locality

Implicitly appending a # to the expression doesn't result in any decrease in syntactic locality, because you always have to read the entire expression and find the # to understand the meaning of the expression. It's impossible to reason about what the legal smart pipeline expression v |> foo(blah, bar, partial(foo, x, # + z), blah) means unless you read the whole thing and find the #. If you don't find one, it's not a giant mental leap to understand that the application is with respect to the entire expression.

Obviously there's a need to exercise good judgement to prevent a needle and haystack situation with really complex expressions, but the need for good judgement is applicable to all language features, and importantly, applies to the smart pipeline feature regardless of whether complex bare expressions are allowed or not.

As far as I can tell, x |> Iter.map(Int.add(1), #) would still work.

@mAAdhaTTah was suggesting that the need for curried functions is reduced by the existence of #-equipped pipelines. It isn't, as the need for a curried Int.add illustrates. To put it another way, writing your functions as x => y => ... is driven by a broader set of goals than can be solved in the pipeline operator proposal.

All of these styles, not just the autocurrying style, exist and are idiomatic JavaScript.

This is not relevant to what we're discussing. The fact that functions exist where you'd have to explicitly do x |> foo(#, 10) is undoubtedly true, and for these you'd just use topic style, exactly as shown. But it seems like a non sequitur to disallow the bare style for complex expressions like x |> bar(10) (desugaring to x |> bar(10)(#)) simply because of the existence of other, more popular patterns, at no benefit to users of either style.

Overall, appending the (#) everywhere may not seem like a big problem, but a user of the functional programming idiom in JS tends to suffer from this kind of death by a thousand cuts of small, individually insignificant annoyances (async/await tied to promises, syntax noise in function application, now this (#) at the end of everything), simply because no one cares about that aspect of the experience. It's not inevitable that functional programming has to suck in JS, it's just a question of priorities.

Thanks for your patience

Likewise, @js-choi. Trying to respond to and accommodate all these different opinions must be like herding cats. 😄

@mAAdhaTTah
Collaborator

Just to throw it out there, the flip side of this is that we're somewhat struggling with the perception of the pipeline operator as an "FP feature", rather than a multi-paradigm one. I don't know if / how much it impacts this discussion, but it's something to bear in mind.

@TehShrike
Collaborator

I've been sad because this language feature, which I've seen in other languages and really want to have, seems so obviously aimed at pure functions, but it seems like there's a movement to try to make it work with object methods, which doesn't make sense to me.

I don't go around trying to make class-related proposals seem more function-like, so why should the function-related features have to bend themselves towards classes :-x

@dead-claudia
Contributor

One other thing is that if the operator is idiomatically spaced (like x |> f(#) rather than x::f()), it encourages people to think of the function as an entity fundamentally separate from the value it's operating on. This will inevitably box people into using it like an FP feature, whether you mean for them to see it that way or not. To draw a concrete example with the three main variants (F#-style, smart pipelines, and method-like chains):

// F#-style
// This is technically parsed as `x |> (f.g())`, not `(x |> f).g()`
x
|> f
.g()

// Smart pipelines
// This could be parsed as either `x |> (f(#).g())` or `(x |> f(#)).g()`,
// as both are semantically equivalent.
x
|> f(#)
.g()

// Method pipelines
// This is technically parsed as `(x::f()).g()`, not `x::(f().g)()`
x
::f()
.g()

@mAAdhaTTah
Collaborator

it seems like there's a movement to try to make it work with object methods

The problem isn't trying to make the pipeline operator work with object methods; the problem is the bind operator (working on methods) and the pipeline operator (working on functions) are both fundamentally about pipelining / chaining, and there isn't an appetite for accepting both into the language. If we're going to solve "pipelining" as a use case, it has to be in a single operator.

@tabatkins
Collaborator

Implicitly appending a # to the expression doesn't result in any decrease in syntactic locality, because you always have to read the entire expression and find the # to understand the meaning of the expression. It's impossible to reason about what the legal smart pipeline expression v |> foo(blah, bar, partial(foo, x, # + z), blah) means unless you read the whole thing and find the #.

Correct, but it's important that, in the current syntax, you know immediately that the expression is in topic form, so you at least know that you do have to go looking for that # to interpret it. The alternative that you're suggesting would mean that you have to first check whether the # exists or not, as that would dramatically change the intention of the entire expression. (It's the difference between the top-level expression preparing a unary function to receive the pipelined value, or the top-level expression using the pipelined value directly; these are very different scenarios!) And it means that, in general, if you forget to put in the # (such as if, for example, your fingers are still used to typing Elixir pipelines, and you're implicitly assuming it'll get passed in as the first argument), your entire expression is accidentally interpreted very differently at runtime.

@mAAdhaTTah was suggesting that the need for curried functions is reduced by the existence of #-equipped pipelines. It isn't, as the need for a curried Int.add illustrates.

"Reduced" is not "eliminated". F#-style encourages heavy usage of curried functions. Smart pipelines reduce this need substantially. There are still situations where you might want currying, of course.

(But that's just a question of how much weight you're willing to tolerate from lambdas, ultimately. Int.add(3) is longer than x => x + 3; with smart pipelines' PF feature, +> # + 3 is even shorter. Where exactly you negotiate the explicit/terse trade-off can reasonably vary by person.)
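
Concretely, the three shapes being compared are roughly these; the +> line assumes the proposal's additional pipeline-function (PF) feature and is otherwise hypothetical:

  const addThree1 = Int.add(3);   // partial application; requires Int.add to be curried
  const addThree2 = x => x + 3;   // plain arrow function
  const addThree3 = +> # + 3;     // PF shorthand sketched in the smart-pipelines draft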

@masaeedu
Author

masaeedu commented Apr 7, 2018

@tabatkins I don't see a difference between "first check to see whether the # exists or not" and "looking for that # to interpret it". You always have to read the entire expression, beginning to end, and if you read the whole thing and there's no #, it's in its default position, i.e. the end.

"Reduced" is not "eliminated". F#-style encourages heavy usage of curried functions. Smart pipelines reduce this need substantially.

I either curry my functions, or I don't. So long as there are use cases that no other tool but partial application will solve, I need to keep currying my function definitions. As an example, I still need to keep map curried in order to be able to pass around the result of partially applying it. The pipeline operator doesn't solve this problem, or the many others for which partial application is intended, so it doesn't reduce the need to curry my functions at all.

@tabatkins
Collaborator

I don't see a difference between "first check to see whether the # exists or not" and "looking for that # to interpret it". You always have to read the entire expression, beginning to end, and if you read the whole thing and there's no #, it's in its default position, i.e. the end.

The point is that reading code well, especially when there's the possibility of FP-ish shenanigans, requires you to know what the expected types of things are before you can start to interpret them. Otherwise you're just mentally tokenizing, not actually reading the code. There's a huge difference between the top-most expression constructing a function that'll get called for a value, and the top-most expression just evaluating to some value; your understanding of the expression as a whole changes pretty drastically in the two situations.

I either curry my functions, or I don't. So long as there are use cases that no other tool but partial application will solve, I need to keep currying my function definitions.

I think we're talking past each other. F# style encourages currying within the pipeline, so you don't have to write lambdas. Smart pipelines reduce this need.

And note that feature PF (the +> syntax) gives you partial application too.

@masaeedu
Author

masaeedu commented Apr 7, 2018

There's a huge difference between the top-most expression constructing a function that'll get called for a value, and the top-most expression just evaluating to some value

When functions are values, which is the mental model for someone performing "FP-ish shenanigans", this is not such a huge difference. It's quite common to treat functions alternately as final results, intermediate values, and as transformers of other values (which again, may be functions), often all within the same expression.

Regarding knowing the actual types of things: the change proposed does not require knowing the type of things any more than the existing proposal does; it's a purely lexical transformation. Obviously understanding what values the expressions foo(#, bar) or baz(quux) => baz(quux)(#) produce actually requires knowing the types of things, but performing the desugaring baz(quux) to baz(quux)(#) requires zero understanding of the semantics of baz and quux.

I think we're talking past each other. F# style encourages currying within the pipeline, so you don't have to write lambdas. Smart pipelines reduce this need.

We are indeed. I still have to write my functions as lambdas, regardless of what shape smart pipeline assumes.

Imagine the following function:

const f = x => y => ...

, which I'd like to use like this:

v |> f(x)

The suggestion is that because I can do v |> f(x, #), I'm now free to substitute my API (for many, if not all f), with:

const f = (x, y) => ...

This is wrong. I still need to do const f = x => y => ..., regardless of what features the smart pipeline operator provides to me. Why?

Because there's way more places where I use partial application of the form:

hof(f(x))

In such places, the pipeline operator and its facilities for convenient partial application are useless to me. I need to pass a partially applied function to another function (a frequent occurrence in functional code), and my options are to either wrap things in a lambda right there, as in hof(y => f(x, y)), or to use some explicit method/syntax for partial application (bind/?).
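
A small sketch of that situation; addCurried and addUncurried are hypothetical names used only to contrast the two shapes:

  const addCurried = a => b => a + b;
  const addUncurried = (a, b) => a + b;

  // Outside a pipeline, the curried form drops straight into a higher-order function...
  [1, 2, 3].map(addCurried(10));           // [11, 12, 13]
  // ...while the uncurried form needs a wrapper lambda, since # only exists inside |>.
  [1, 2, 3].map(b => addUncurried(10, b)); // [11, 12, 13]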

@tabatkins
Collaborator

When functions are values, which is the mental model for someone performing "FP-ish shenanigans", this is not such a huge difference.

As a functional programmer myself, I strongly disagree; I have to flip a mental switch to read something as manipulating a function versus just executing something. We might have to leave this as just us having different mental models of things.

However, given that most programmers use FP lightly (just passing around functions as callbacks), I think it's reasonable to state that for most programmers, there's a big mental difference when attempting to interpret an expression between something that results in a function that'll take a value and something that just uses that value directly.

We are indeed. I still have to write my functions as lambdas, regardless of what shape smart pipeline assumes.

Ah, sure, by "using currying" I was referring to actually partially applying a function, not just declaring your functions to allow it. So yeah, we're just using the terms differently. 👍

@masaeedu
Author

masaeedu commented Apr 17, 2018

@js-choi In case you're interested, here's a codebase where I've been using the pipeline operator heavily, including in the x |> f(g) style. Almost all the JS files in there contain some usage, but a couple of examples that stood out to me are here and here.

@mAAdhaTTah
Collaborator

@masaeedu Do you have any examples in that codebase where you use arrow functions within the pipeline?

@masaeedu
Author

@mAAdhaTTah Here's an example of a somewhat large arrow function in the pipeline, here's a shorter one.

@mAAdhaTTah
Collaborator

@js-choi Is expanding bare-style in Smart Pipelines to function like F# Pipelines currently do out of the question?

@masaeedu
Author

masaeedu commented Apr 17, 2018

If I'm not misunderstanding, that second arrow function I linked to would need to be expressed as:

export const sequence = A => o =>
  pairs(o)
  |> Arr.map(([k, v]) => v |> A.map(embed(k))(#))(#)
  |> (A.of(empty) |> Arr.foldl(A.lift2(append))(#))(#);

It's not the end of the world, but it's just weird syntactic noise trying to match up the #s with the expressions they belong to (and that's ignoring the extra parens it introduces). It'll get worse in long functions like the first one I linked to.

@tabatkins
Collaborator

Correct, but that's because you're intentionally doing point-free stuff, instead of just calling methods on objects. Point-free is supported by this syntax, but not catered to.

If you rewrite it to actually use arguments, it should be clearer, but I honestly can't tell what this is doing in the first place to be able to rewrite it. This is not the sort of code that benefits from being even terser; it deserves to be separated into sub-statements and commented, imo.

@mAAdhaTTah
Collaborator

I would argue point-free isn't supported at all by Smart Pipeline, as your points need to be made explicit through the # syntax.

@aikeru

aikeru commented Apr 20, 2018

@babakness @masaeedu Still, seems less of a pain than adding a wrapper function to me...

//Given these two functions
const add2 = a => a + 2
const add = a => b => a + b
5 |> add2 // Legal in smart pipelines
5 |> add(2) // Not legal in smart pipelines
5 |> add(#)(2) // Legal in smart pipelines

@masaeedu
Author

@aikeru It's probably better, I'm not too good at desugaring the pipeline operator stuff myself. My point is that # itself is a pretty big footgun, with or without the change being proposed in this issue, because no sooner had I posted my comment than someone used it to shoot themselves in the foot. :)

@babakness

babakness commented Apr 20, 2018

@aikeru Maybe using the # should require a modified pipeline operator. Don't take away my point-free functions.

const listener = onEvent => dom => fn => dom.addEventListener( onEvent, fn )
const T = x => f => f(x) // Thrush combinator
const onClick = listener('onclick')

selector('.foo')
  |> map( onClick )
  |> T( animate )

Edit: the original code was missing the combinator

@aikeru

aikeru commented Apr 20, 2018

@babakness
I have to infer from your example that the definition of map must be the following...

// Apologies if this isn't the way you'd format it
const map = (func) => (target) => func(target)
const listener = onEvent => dom => fn => dom.addEventListener( onEvent, fn )
const onClick = listener('onclick')

// You still have these options ...
// 1. Use it as-is
selector('.foo')
  |> map( onClick )( # )
  |> animate

// 2. Use a composed function
const mapToOnClick = map( onClick )
selector('.foo')
  |> mapToOnClick
  |> animate

@masaeedu
Author

@aikeru The mapToOnClick approach is combinatorially expensive. The code is basically going to be deluged with one-off variable names like mapTheFunction and mapTheOtherFunction, which is exactly what the pipeline operator was meant to avoid.

Regarding appending (#), yes, it doesn't look too bad in this trivial example, but it gets ugly and confusing very rapidly in more complex cases. See previous discussion in this thread. The proposed solution so far has been "change all your stuff to use methods instead", but there is often no way to use methods, or it is substantially more inconvenient than using closures.

@aikeru

aikeru commented Apr 20, 2018

@masaeedu I expect chaining the pipeline operator is the most common use case. I expect nesting the pipeline operator would be used sparingly, and with as helpful formatting as possible, just like I see most people treat the ternary operator. Do you think otherwise?

@masaeedu
Author

masaeedu commented Apr 20, 2018

@aikeru Yes, I certainly do. Outside of trivial examples, you'll very frequently have to put in functions that massage the arguments in various ways. Being able to use pipeline operator in there is important to avoid deeply nested parens.

It's very common to see JS code that does myArr.map((x, i) => ...).filter(x => ...), and the same standard should be applied to code that uses myArr |> map((x, i) => ...) |> filter(x => ...), rather than simply recommending that users of the latter approach explode the expression into lots of temporary variables.

@aikeru

aikeru commented Apr 20, 2018

@masaeedu Okay, but that doesn't require nesting or anything of the sort, does it? Am I misunderstanding here?

myArr
  |> #.map((x, i) => ...)
  |> #.filter(x => ...)

Please don't reduce my side of the discussion to absurdity regarding temporary variables :)

@babakness

@aikeru No, because selector returns a list and onClick expects a single DOM element

@aikeru

aikeru commented Apr 20, 2018

@babakness so your map function walks over an iterable, as opposed to just invoking a function on it? If so, okay, that makes sense. Seems like my previous suggestion still works. Anyway, hopefully I've made the point I intended to make. :)

@masaeedu
Author

masaeedu commented Apr 20, 2018

@aikeru Yes, I think there's a misunderstanding. map is const map = f => arr => ..., so you have to write it as:

myArr |> map((x, i) => ...)(#) |> filter(x => ...)(#)

Now if I end up using |> inside map or filter (see above for examples of this), I end up with:

myArr |> map((x, i) => ...)(#) |> filter(x => x |> isUsernameValid(#) |> isPasswordValid(#))(#)

This in itself is a rather trivial example, and things only get uglier and more ambiguous from here on.

@babakness

In short, # is great as an option, but not as an enforced requirement.

@aikeru

aikeru commented Apr 20, 2018

@masaeedu That is somewhat clearer, but not entirely. It would help if you fleshed out these other little functions a bit more.
It seems like x is a user or something, which is piped into isUsernameValid. Does piping the result of that into isPasswordValid mean isUsernameValid returns the same user?

@masaeedu
Author

@aikeru I just pulled those names out of a hat, but imagine each is User? -> User?, with the user being returned if it's valid, and undefined otherwise. I'm not looking to get you to refactor some business logic, I'm trying to demonstrate that using the pipeline operator nested is syntactically ugly.

@dead-claudia
Contributor

My other reservation remains: when you're creating a long pipeline, it's very useful to have a name somewhere in the pipeline. Even if you just reuse the same name as in an earlier function, having a descriptive name is helpful in and of itself.

@babakness

babakness commented Apr 20, 2018

@isiahmeadows solid point.

@mAAdhaTTah

FWIW, as I think about it, I feel split.

Having topic style is really great. Using it with TypeScript, you'd get type checks. Currently, TypeScript doesn't do a good job with point-free functions.

On the other hand, it really needs its own operator. You are either using # or not; instead of hunting around for # on a line, a |: or +> at the beginning signals clearly what is going on. If anything, not using # with said operators should be a friendly error. "Didn't you mean to provide a placeholder?"

Maybe the issue is in bundling them together. Having two clean proposals is the way forward. I understand that it is difficult for people to understand it all; the logistics are something I can appreciate.

@mAAdhaTTah
Collaborator

TypeScript does fine with point-free functions. It's the automatic currying of some libraries that it struggles with. Writing this is clear:

const add = a => b => a + b

This is harder:

const add = R.curry((a, b) => a + b)

If anything, not using # with said operators should be a friendly error. "Didn't you mean to provide a placeholder?"

This is basically what Smart Pipelines do now, but without needing two related but slightly different operators. We're unlikely to get multiple variants of the idea of pipelining through committee. This is part of the reason the bind operator stalled; the pipeline operator handles the bind operator's pipelining features.
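
Roughly, the overlap between the two proposals looks like this; both lines use proposal-stage syntax and are only meant to show the shared pipelining use case:

  value::transform(options)            // bind-operator style: call transform with `this` bound to value
  value |> transform.call(#, options)  // smart-pipeline topic style reaching the same call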

@dead-claudia
Contributor

dead-claudia commented Apr 20, 2018

@babakness @mAAdhaTTah

TypeScript does fine with point-free functions. It's the automatic currying of some libraries that it struggles with.

Ref: microsoft/TypeScript#5453

Just to elaborate on that: TypeScript sucks with variadic functions in general. Currying is merely a subcase of that particular meta-issue. (For one, they can't even fully type JS's existing builtins that have existed since ES3, much less more complex stuff like currying...)

@babakness

babakness commented Apr 20, 2018

@mAAdhaTTah

What @isiahmeadows said.

const add = ( a: number ) => ( b: number ) => a + b
pipeline(
   'orange juice',
   add(1),
)

No complaints

I've even tried typed functional libraries like fp-ts but still had issues where TS failed with the whole pipe / compose thing. Or where data is piped between map calls, etc. This is why I look forward to this syntax going through. MS will have to deal with it then.

The Monad issue will still be there but at least composing will be a lot better.

@mAAdhaTTah
Collaborator

@babakness We're all saying the same thing. This is point-free and correctly errors:

const nums = ['1', '2', '3']

const add = (a: number) => (b: number) => a + b

nums.map(add(1))

The automatic currying I mentioned is a specific case of variadic functions (and an area I ran into issues myself using Flowtype). My point was just that # doesn't have any impact on whether it can be typed or not. 1 |> add(1) (F#) or 1 |> add(1)(#) (Smart) should typecheck the same.

@dead-claudia
Contributor

BTW, with the direction I'm taking in reworking my lifted pipeline proposal, smart vs. normal pipelines are no longer a concern for me in that area. I just need to know which syntax is chosen, so I know which syntax to tailor it to.

(I'm still partial to either the F# or this-binding variants.)

@js-choi
Collaborator

js-choi commented Apr 20, 2018

Sorry for the delay in responding to the comments over the past two weeks. I’ve been busy out of town, and I’m unfortunately still busy with a major transition, so I’ve had to disengage for the past weeks. A lot of discussion has been happening here. That’s a good thing, and I am thankful, but it’s also overwhelming, so my apologies if I can’t respond to everything here, at least for now. I want to focus on developing the Babel plugin instead during my free time.

I do want to address this question, though:

@js-choi Is expanding bare-style in Smart Pipelines to function like F# Pipelines currently do out of the question?

Nothing in the smart-pipelines proposal is out of the question. I think it is premature to rule completely in favor of one way or another.

I do indeed have reservations about distinguishing bare style and topic style purely by the presence/absence of the topic reference—which would make garden-path expressions, requiring indefinite lookahead, more likely. I came to this conclusion in March after rewriting a lot of real-world codebases using pipelines and playing around with various possibilities. But no one can confidently claim that one approach or another is better until we are able to test it hands-on, ideally with a Babel plugin.

I still plan to develop such a Babel plugin together with @mAAdhaTTah, despite my reduced free time. And I do plan to implement many possible variations of smart pipelines, so that all of them may be tried by switching configuration options.

The plugin development will take a while, and it would be understandable for anyone to be frustrated by such a slow pace. But all of us – @mAAdhaTTah, myself, @littledan, TC39, and everyone commenting here – are volunteers in this process. And until that Babel plugin is written, or until someone else tests the different pipelines on a corpus of actual real-world code, all of this discussion is theoretical. @mAAdhaTTah, @littledan, Yulia Startsev of Mozilla, and I have also been discussing running usability studies with JavaScript developers, or on real-world code in many coding styles, though those would not happen soon either.


As an aside, many of the concerns expressed above, regarding the readability of nested pipelines, are mitigated by simply avoiding nested pipelines. Just because you can do something does not mean you should. Smart pipelines, variables/constants, and functions all should be used as necessary. Actually writing the example in #116 (comment) (const fib = x => x |> ( # < 2 ? # : fib( # - 2 ) |> fib( # - 1 ) )) as a one-liner pipeline is not a good idea, just as you would not use the F-sharp-style pipeline proposal to write const fib = x => x |> ( y => y < 2 ? y : ( fib(y - 2) |> ( y => fib(y - 1) ) ) ). This is a bad idea with either pipeline proposal. You should just use const fib = x => x < 2 ? x : fib(x - 1) + fib(x - 2).

In general, any feature can be used to write unreadable code.

(Avoiding footguns is still important. In fact, the current smart-pipeline proposal’s early-error rules guarantee that, if that fib function’s code successfully compiles, then none of the pipelines accidentally omitted a #. const fib = x => x |> ( # < 2 ? # : fib( # - 2 ) |> fib( x - 1 ) ) contains an easy-to-miss bug. If there was no early error and it successfully compiled, then the bug may silently cause more bugs very late during runtime. But that doesn’t mean that pipelines should have been used here in the first place, in any case.)

It is true that many of the real-world examples in the readme do use nested pipelines. But all of the examples using them are formatted and indented clearly, and the early-error rules guarantee to the reader that every step's topic reference is actually present. None of them are like the example in #116 (comment) – at least intentionally. I certainly could rewrite the real-world examples in the readme to deemphasize nested pipelines; I do not want to mislead people into thinking that using as many nested pipelines as possible would be “best practice”. There is also the idea of forbidding nested pipelines altogether, as Clojure does with its #(…) syntax.

But this is just an aside. Until that Babel plugin is written, or until someone else tests the different pipelines on a corpus of actual real-world code, all of this discussion is theoretical.

Thanks to everyone for your patience.

@masaeedu
Author

You should just use const fib = x => x < 2 ? x : fib(x - 2 ) |> fib(x - 1).

@js-choi Maybe I'm misunderstanding, but I think const fib = x => x < 2 ? x : fib(x - 2) |> fib(x - 1) is illegal. This issue is about the fact we're not allowed to have fib(x - 1) on the RHS of the pipeline operator.

@js-choi
Collaborator

js-choi commented Apr 20, 2018

Oh, whoops, I forgot to finish rewriting that code. I meant, of course, “You should just use const fib = x => x < 2 ? x : fib(x - 1) + fib(x - 2).” Thanks for pointing that out; I’ve edited it.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 12, 2021