
Different namespaces for iterators/procs are a design mistake #8901

Closed
Araq opened this issue Sep 7, 2018 · 48 comments
@Araq
Member

Araq commented Sep 7, 2018

In today's Nim, iterators and procs can have the same name and type signature. The idea was that iterators can only be used in for-loop contexts anyway and procs cannot. However, with the addition of closure iterators this is no longer true.

It also leads to confusing error messages, like "Undeclared someIter" when "someIter" was clearly declared.

The type operator has a special rule that prefers the "iterator" interpretation over the "proc" interpretation, so that toSeq and its siblings can work. Given toSeq, we don't need to be able to overload proc splitLines and iterator splitLines; the iterator version is good enough. When a seq is required from splitLines, one can write toSeq splitLines(...) instead. Yes, it's slightly more typing effort, but Nim becomes easier to understand, weird edge cases are eliminated, and the spec gets simpler.
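As a language-neutral sketch of the toSeq idea: a generator plus an explicit materializing call plays the same two roles as an iterator plus toSeq. The split_lines generator below is a hypothetical stand-in, not Nim's actual implementation.

```python
def split_lines(s):
    """Hypothetical stand-in for an iterator-only splitLines."""
    for line in s.split("\n"):
        yield line

# The iterator alone covers for-loop use:
for line in split_lines("a\nb\nc"):
    pass

# When a sequence is actually needed, materialize it explicitly,
# analogous to `toSeq splitLines(...)` in Nim:
lines = list(split_lines("a\nb\nc"))
print(lines)  # ['a', 'b', 'c']
```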

@Araq Araq changed the title Different scoping rules for iterators/procs are a design mistake Different namespaces for iterators/procs are a design mistake Sep 7, 2018
@andreaferretti
Collaborator

I always assumed they shared a namespace, so I never even noticed they live in different ones: I never tried to overload a proc with an iterator of the same name and signature :-)

@narimiran
Member

we don't need to be able to overload proc splitLines and iterator splitLines, the iterator version is good enough

In that example, proc splitLines would cease to exist?
And all other procs that have the same name as iterators (for strutils that is: split, splitWhitespace, rsplit and splitLines)?

Wouldn't that cause really a lot of breaking?
(I personally don't mind since the language is still in the pre-v1.0, but—unlike some other people—I don't have any large projects written in Nim)

@Araq
Member Author

Araq commented Sep 7, 2018

In that example, proc splitLines would cease to exist?
And all other procs that have the same name as iterators (for strutils that is: split, splitWhitespace, rsplit and splitLines)?

Yes, indeed.

Wouldn't that cause really a lot of breaking?

There would be a switch like --namespaceForIterators:on for backwards compat, just like we got --laxStrings:on and --nilseqs:on. We can collect these switches under a --version: 0.18 switch too.

@Araq
Member Author

Araq commented Sep 7, 2018

As a first step, we would do:

proc splitLines(...): seq[string] {.deprecated: "use sequtils.toSeq splitLines() instead".}

to prepare everybody for this change.

@metagn
Collaborator

metagn commented Sep 7, 2018

What about `..`? Would we use newSlice instead?

@Araq
Member Author

Araq commented Sep 7, 2018

@hlaaftana I totally missed that overload. This needs some serious thinking. :-(

@Araq
Member Author

Araq commented Sep 7, 2018

Would we use newSlice instead?

Or maybe toInterval. As a Slice constructor it's not used often. The fact that array also uses this syntax can be special-cased in the compiler.

@LemonBoy
Contributor

LemonBoy commented Sep 7, 2018

What about ..?

Let .. die, I spent a lot of time wondering why my code wasn't working only to discover that it's just an alias for countup and that if a > b you get nothing out of it.

@Araq
Member Author

Araq commented Sep 7, 2018

Let .. die, I spent a lot of time wondering why my code wasn't working only to discover that it's just an alias for countup and that if a > b you get nothing out of it.

This is covered in the tutorials. And why would it iterate over something if a > b anyway?

@LemonBoy
Contributor

LemonBoy commented Sep 7, 2018

This is covered in the tutorials.

That I didn't read :)

And why would it iterate over something if a > b anyway.

Because I expected it to be some kind of smart-ass range iterator, using countup and countdown explicitly is much better IMO (also from a stylistic point of view, using .. and countdown doesn't look that good :)

@kaushalmodi
Contributor

Oh, I love the .. syntax. Please don't let that die.

@Araq
Member Author

Araq commented Sep 7, 2018

Oh, I love the .. syntax. Please don't let that die.

Just to clarify, under this proposal for i in a..b would continue to work, but let x = a..b would become let x = toInterval(a, b) or similar.

@kaushalmodi
Contributor

So the HSlice creation using .. is going away? None of the snippets with .. (esp Code Snippet 17) in https://scripter.co/notes/nim/#hslice will work?

@Araq
Member Author

Araq commented Sep 7, 2018

@kaushalmodi unfortunately not, unless we can come up with a better solution.

@Araq
Member Author

Araq commented Sep 7, 2018

Sorry, I'm wrong, all these can continue to work if HSlice has an items iterator instead.

@metagn
Collaborator

metagn commented Sep 7, 2018

In that case, ..< will always subtract 1 and use <= instead of using <, unless there was a distinct type like PreviousIndex or UntilIndex, similar to how BackwardsIndex works - but that would be obnoxious because the 0 .. <a.len syntax is deprecated already.

@Araq
Member Author

Araq commented Sep 7, 2018

In that case, ..< will always subtract 1 and use <= instead of using <, unless there was a distinct type like PreviousIndex or UntilIndex, similar to how BackwardsIndex works - but that would be obnoxious because the 0 .. <a.len syntax is deprecated already.

No, why? There is no ..< slice constructor, it's only an iterator.

@metagn
Collaborator

metagn commented Sep 7, 2018

There is this:

Nim/lib/system.nim

Lines 3510 to 3512 in e81fe6d

template `..<`*(a, b: untyped): untyped =
## a shortcut for 'a .. (when b is BackwardsIndex: succ(b) else: pred(b))'.
a .. (when b is BackwardsIndex: succ(b) else: pred(b))

@Araq
Member Author

Araq commented Sep 7, 2018

Ok, you know the stdlib much better than I do. :-)

That's a bit of a problem indeed then.

@cooldome
Member

cooldome commented Sep 7, 2018

While I like the proposal, one performance concern:
For an iterator you don't know the length upfront, so you can't preallocate; for a seq you can.

@timotheecour
Member

timotheecour commented Sep 8, 2018

For an iterator you don't know the length upfront, so you can't preallocate; for a seq you can.

I've been meaning to write a proper post on that (later..), but briefly: I think we can make (inline) iterators first-class entities, allowing things like:

assert type(iter) == iterator[int] # instead of int (there's a way to make this work without breakage, possibly using a different name eg`typeGeneralized`)
iterator iter2 = iter # can be passed around, while remaining inline

Furthermore, it'd allow extending (in user code) the iterator concept by reusing existing Nim features, bridging some of the gap with D's ranges.
E.g., it'd allow library/user code to define custom iterators with:

  • len attribute (allows more efficient toSeq iterWithLength that you're referring to)
  • an infinite bool attribute (for infinite iterators)
  • a bidirectional bool attribute (for iterators that can be iterated over in either direction, eg allows reverse on bidirectional iterators)
  • a random access attribute (note: this is different from a seq as it doesn't even need to be backed by memory; e.g., see D's iota: https://dlang.org/library/std/range/iota.html)
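The len attribute idea can be sketched language-neutrally: an iterator object that advertises its length lets a toSeq-style consumer allocate the result once instead of growing it. All names below are hypothetical illustrations, not proposed API.

```python
class CountUp:
    """Hypothetical iterator exposing a length for preallocation."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __len__(self):
        # The "len attribute" from the bullet list above.
        return max(0, self.b - self.a + 1)

    def __iter__(self):
        x = self.a
        while x <= self.b:
            yield x
            x += 1

def to_seq(it):
    # A len-aware consumer preallocates once instead of appending.
    out = [None] * len(it)
    for i, v in enumerate(it):
        out[i] = v
    return out

print(to_seq(CountUp(1, 4)))  # [1, 2, 3, 4]
```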

@dom96
Contributor

dom96 commented Sep 8, 2018

Wow. I'm really going against the consensus here, but I dislike this.

Are you guys really okay with no longer being able to write "1 2 3 DATA".toLower().split(' ')? Unless we gain the ability to chain iterators, which AFAIK isn't possible right now.

@GULPF
Member

GULPF commented Sep 8, 2018

IMO the current namespace rules cause a lot of confusion, with very little benefit. It's completely non-obvious that the following calls to foo might actually call different routines:

let x = foo(y)

for z in foo(y):
  discard

I hope this RFC will be implemented.

@zah
Member

zah commented Sep 9, 2018

Let's turn this into a complete RFC. I suggest the following changes in the type system surrounding inline iterators and the semantics of closure iterators:

  1. There is a type called Iterator[T] (this already exists and it's usually written iterator: T). Within the compiler, we use tyIter[T] for it. An instance of this type represents a stream of values.

  2. An inline iterator can be considered a factory function returning an Iterator[T]. In other words, countup(a, b) is considered to have the signature proc countup(a, b: int): Iterator[T]. It has the same control-flow semantics as if it were written like this, but it can still be fully inlined:

proc countup[T](a, b: T): iterator: T =
  result = iterator () {.closure.} =
    var res = a
    while res <= b:
      yield res
      inc(res)
  3. You can use the lifted definition above to instantiate countup as a closure iterator:
var myIter = countup(1, 2)
echo myIter() # some(1)
echo myIter() # some(2)
echo myIter() # none()

The API above uses option types and perhaps that's controversial (for Araq). Alternatively, step 3 can return a default value and set a finished flag.
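The Option-based pull API of step 3 can be sketched by wrapping a generator; Some and the NONE sentinel below are hypothetical stand-ins for Nim's options, not real API.

```python
def count_up(a, b):
    x = a
    while x <= b:
        yield x
        x += 1

class Some:
    """Hypothetical stand-in for some(value)."""
    def __init__(self, value): self.value = value
    def __repr__(self): return f"some({self.value})"

NONE = None  # stands in for none()

def pull(gen):
    """Advance the iterator: Some(value), or NONE when exhausted."""
    try:
        return Some(next(gen))
    except StopIteration:
        return NONE

my_iter = count_up(1, 2)
print(pull(my_iter))  # some(1)
print(pull(my_iter))  # some(2)
print(pull(my_iter))  # None (exhausted)
```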

  4. You can define higher-order iterators:
iterator map[A, B](iter: iterator[A], f: proc(a: A): B): B =
  for elem in iter: yield f(elem)

When instantiated with inline iterators, these definitions are expanded to efficient inline code.

  5. The name of a closure iterator is no longer an instance factory.

The current semantics of closure iterators are not well understood by many and potentially quite confusing. Consider this snippet and take some time to understand it:

proc foo(start: int) =
  iterator bar {.closure.} =
    var i = start
    yield i
    inc i
    yield i

  var x = bar # this creates one instance of the iterator
  echo x()
  var y = bar # this creates another fresh instance
  echo y()

foo(10)

The output is

10
10

The explanation is that every time you reference the name of a closure iterator, you create a fresh new instance of it. This is in contrast to regular closures where you can reference the name of an inner proc many times and you'll always get the same closure object. If closure iterators were behaving exactly as regular closures in this regard, I think the language would be more consistent and intuitive for newcomers.
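For contrast, Python generators already behave the way this point proposes for Nim: the generator object is instantiated explicitly, and referencing it twice does not create a fresh instance. A sketch of the contrasting semantics (not Nim code):

```python
def foo(start):
    def bar():
        i = start
        yield i
        yield i + 1

    gen = bar()       # explicit instantiation: exactly one instance
    x = gen
    first = next(x)   # 10
    y = gen           # same instance, not a fresh one
    second = next(y)  # 11 - unlike Nim's current 10, 10
    return first, second

print(foo(10))  # (10, 11)
```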

  6. Remove the support for arguments for the closure iterators.

This is another feature that is confusing for a lot of people and it doesn't add any expressivity to the language, because you can always model it with additional captured variables. Furthermore, having an iterator param requires you to update the value at each step, while updating a captured value can be done on-demand when that's actually required.


Points 5 and 6 are breaking changes, but I think the currently confusing semantics are so rarely activated that most of the code wouldn't require any changes. Using the Option type as suggested in point 3 would be a more significant breaking change that can be discussed here.

@Araq
Member Author

Araq commented Sep 9, 2018

My RFC was about making the language simpler, yours is yet-another type extension.

This is another feature that is confusing for a lot of people and it doesn't add any expressivity to the language, because you can always model it with additional captured variables.

Captured variables can introduce a copy and avoiding it means you need to move the data into the iterator's state and then move it out again after the iteration. Not exactly an elegant design IMO.

@zah
Member

zah commented Sep 10, 2018

Well, my point is that the Iterator[T] types already exist. We only need to recognize them and handle them properly in more situations (e.g. in the higher-order iterators). I would argue I'm making the type system simpler, because I'm limiting tyIter to a single child (currently, it looks like a tyProc type).

Can you provide an example where a captured variable would produce different code than an iterator param? Having an iterator param also expands the iterator's state. The two approaches look the same to me.

@zah
Member

zah commented Sep 10, 2018

Actually, I agree that the generated code is different enough for closure iterators with parameters and they can actually have different expressivity when you return them from functions and their parameters get updated far from the place where they were created. Point 6 can be retracted, but it's relatively independent from the rest of the proposals.

@Araq
Member Author

Araq commented Sep 10, 2018

There is a type called Iterator[T] (this already exists and it's usually written iterator: T). Within the compiler, we use tyIter[T] for it. Instance of this type represents a stream of values.

It's not clear what you mean here, is this

a) tyIter instead of tyProc
b) tyIter[ResultType], a wrapper for the iterator's result type.

@zah
Member

zah commented Sep 10, 2018

tyIter[T] means "started iteration producing T values".

It's important to recognize that an inline iterator is a factory function for creating such values. Once you create the iterator with countup(1, 10) for example, each next step doesn't require any input, so you can pass the iterator to algorithms expecting Iterator[int] (such as my map example).

tyIter[T, A] on the other hand would be "started iteration producing T values, but expecting a single A value at each step".

@Bulat-Ziganshin

From a pure theoretical standpoint, your Iterator[T] type should be named Stream[T]. So, we can say that iterator is a procedure that returns a stream.

And, similarly to Stream[T], it would be nice to have Lazy[T] that lazily returns a single T value. Actually, Lazy[T] = proc(): T, but we may add some extra sugar to simplify work with lazy values.

Also, sorry, I don't remember the manual details - can we just make a single iterator type and rely on compiler to decide whether it should use inline or closure implementation? I.e. have closure official semantics and allow compiler to optimize it in simple cases?

@Araq
Member Author

Araq commented Sep 11, 2018

It's important to recognize that an inline iterator is a factory function for creating such values. Once you create the iterator with countup(1, 10) for example, each next step doesn't require any input, so you can pass the iterator to algorithms expecting Iterator[int] (such as my map example).

That's a wrong way to design things IMO, if we were to redesign this from scratch.

Firstly, the x in for x in countup(...) is not of type Iter[int], it is int, so any factory that countup produces needs to be skipped by the for statement. Secondly, this still ties allocation too heavily to the type. Instead we need a more low level design, like:

type
  BaseIter {.inheritable.} = object
    state: int 

proc hasNext(it: BaseIter): bool = it.state >= 0

An iterator like countup then produces code like:

type 
  CountupIter = object of BaseIter
    v, a, b: int

proc next(it: var CountupIter): int =
   discard "state machine here"

The for loop for x in countup(1, 3) is then rewritten to:

var it: CountupIter
it.a = 1
it.b = 3
it.state = 0
while hasNext(it):
  let x = next(it)

Which a good optimizer can optimize to the old for loop as it only touches stack slots. (We can also do this optimization in the Nim compiler.) But the CountupIter type would be accessible to the programmer, it can be serialized, you can also allocate it on some heap...

It unifies closure iterators with "inline" iterators, allows for iterating over 2 data structures simultaneously, only uses the compiler to write these state machines for us, fits "systems programming" better.
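The lowering above can be sketched as an explicit state-machine object with hasNext/next in any language; the Python version below mirrors the Nim sketch (CountupIter and its fields are illustrative, not a proposed API).

```python
class CountupIter:
    """Explicit state-machine form of countup (illustrative)."""
    def __init__(self, a, b):
        self.v, self.a, self.b = a, a, b
        self.state = 0 if a <= b else -1  # -1 means finished

    def has_next(self):
        return self.state >= 0

    def next(self):
        result = self.v
        self.v += 1
        if self.v > self.b:
            self.state = -1  # iteration finished
        return result

# `for x in countup(1, 3)` rewritten as the proposal describes:
it = CountupIter(1, 3)
collected = []
while it.has_next():
    collected.append(it.next())
print(collected)  # [1, 2, 3]
```

Because the object only holds plain fields, it can be serialized, heap-allocated, or advanced in lockstep with a second iterator, which is the point of the design.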

@mratsim
Collaborator

mratsim commented Sep 11, 2018

Araq's design is very similar to Rust's and Python 2 iterators:

An iterator type or even concept just requires an init and a next proc.

As shown by @alehander42 in the zero-functional benchmarks, this gets compiled into very efficient code even when chaining iterators; it is even faster than Nim's version with a macro fusing the iterators.

Even in Arraymancer for efficient parallel iteration on multiple tensors I had to use the following design:

  • an init iterator template
  • a yield template
  • an advance iterator template
    See here. This is scalable to a variadic number of tensors arguments (albeit I have some varargs[typed] issues at the moment).

It is key that an inline iterator doesn't allocate otherwise it's slow, completely incompatible with multithreading and also cannot be used in embedded devices. The following doesn't work:

  2. An inline iterator can be considered a factory function returning an Iterator[T]. In other words countup(a, b) is considered to have the signature proc countup(a, b: int): Iterator[T]. It has the same semantics as if it was written like this:
proc countup[T](a, b: T): iterator: T =
  result = iterator () {.closure.} =
    var res = a
    while res <= b:
      yield res
      inc(res)

@zah
Member

zah commented Sep 11, 2018

@Araq, of course the for loop must unpack the iterated values. This should be true even with the closure iterators:

var iter = iterator: int {.closure.} =
  yield 1
  yield 2
  yield 3

# The type of `iter` is iterator: int right now

for el in iter: 
  echo el # the type of `el` is int as expected

I was actually expecting that Nim already works this way and I just found out this is not the case.

@Araq seems to be arguing that Iterator[T] should not be a specially handled built-in type, but rather a concept. This looks fine at the surface, but breaks down a bit when you consider how inline iterators are actually compiled.

You cannot efficiently implement the next function without placing a Duff's device inside its body. Everyone can try it in their head with the simple three-yield definition above. Trusting the compilers to optimize away nested invocations of functions containing Duff's devices sounds too optimistic to me. I think we should rather stick with our own inlining transformations.
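For illustration, here is what a hand-compiled resumable next looks like for the three-yield iterator, sketched as an explicit switch on the saved state (the moral equivalent of a Duff's-device dispatch in C; all names illustrative):

```python
class ThreeYields:
    """Hand-compiled resumable next for: yield 1; yield 2; yield 3."""
    def __init__(self):
        self.state = 0

    def next(self):
        # Each call dispatches on the saved resumption point.
        # Optimizers struggle to flatten nested chains of these.
        if self.state == 0:
            self.state = 1
            return 1
        elif self.state == 1:
            self.state = 2
            return 2
        elif self.state == 2:
            self.state = 3
            return 3
        raise StopIteration

it = ThreeYields()
print(it.next(), it.next(), it.next())  # 1 2 3
```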

So, if Iterator[T] is something that can potentially trigger the inlining transforms, and it can also be lifted to a first-class value by being compiled to a closure iterator, why wouldn't this be enough? We can also allow the user to implement the same compile-time and run-time contract with regular object types (we can say that an object with a () operator can substitute for a closure in certain situations, and an object with a next proc can substitute for a closure iterator).

Yesterday, I was thinking about how to resolve the differences between iterators with params and iterators with captured values, and I came up with a scheme for closure types with exposed state (which allow you to access the captured variables):

type
  # Applying the `with` operator to a closure type allow you to obtain
  # a more specific closure type that has a specific captured state.
  MyClosure = proc(x: int): int with (string, int)
  MyIterator = iterator: string with (Foo, Bar)

var c: MyClosure = ...
c[0] = "some string" # You can read and modify the values captured by the closure

To use your own example, we don't want to end up in a situation where pragmatists avoid closures and iterators, because they may end up needing to serialize the values eventually (just like in C++ and Java people always write getter functions in case they become necessary). Removing such trade-offs in the language is the better way to design things IMO and the suggested types above are one way to do it.

To re-iterate the main point, not having specially handled inline iteration values sounds too risky to me. We can have the Iterator[T] type for them and then remove the rest of the trade-offs by "opening up" the first-class iterator values to the user a bit.


@mratsim, you seem to be confused into thinking that I suggest inline iterators are compiled as closure iterators. This is not the case; I was merely explaining that the control flow matches what the programmer would expect from the equivalent closure iterator. Inline iterators are still compiled to efficient inlined code without allocations.

@Bulat-Ziganshin

Araq's design is very similar to Rust's and Python 2 iterators:

my first thought was that he copied Lua iterators intact. But I don't know much Python/Rust

It is key that an inline iterator doesn't allocate otherwise it's slow, completely incompatible with multithreading and also cannot be used in embedded devices

I also want to be able to use iterators in hand-optimized, low-level code which I now write in C++. So e.g. for i in 0..100 should be compiled to for(i=0;i<=100;i++) or equivalent.

OTOH, I think that Nim has too many small practical features, which would be better replaced by fewer, more fundamental ones. In particular, we can implement iterators in the following ways:

  • Lua style - the iterator is an object with a next method
  • inlined style - as currently implemented
  • inversion-of-control style - the iterator is a procedure that receives the loop body as a lambda parameter
  • coroutine style - the iterator is an asymmetric stackful coroutine, with control transferred back and forth by a call/yield pair

Now, while we have a good number of implementation approaches (and non-C backends may add their own, in particular based on particular VM features!), we had better find a single concept and syntax that can cover them all, while ensuring that specific optimizations remain possible, e.g. via pragmas.

So, I propose the following plan:

  • develop semantics for iterators that we want to have implemented
  • develop a single, universal language spec for iterators
  • discuss how a developer can ensure that their specific iterator will be implemented with concrete, efficient semantics (similar to the current pragmas specifying the iterator implementation); or what to do if that's impossible

@Bulat-Ziganshin

Bulat-Ziganshin commented Sep 11, 2018

Now, my first attempt to iterator proposal:

concept Iterator[T,T1,T2...] = lambda(T1,T2...): Maybe(T)

So, an iterator is either a procedure with internal state (i.e. a lambda), or an object with a method next having the same signature.

iterator keyword automatically converts yield-based syntax into some object obeying Iterator concept:

iterator range(i, j: int): int =
  var x = i
  while x < j:
    yield x
    inc x

is translated to

proc range(i, j: int): Iterator[int] =
  var x = i
  return proc(): Maybe[int] =
    if x >= j: return Nothing
    result = Just(x)
    inc x

@zah
Member

zah commented Sep 11, 2018

I should also point out that Araq's transformation is not equivalent to what Python is doing. Consider the following snippet:

def generator(a, b):
    print("iteration started")
    while a < b:
        yield a
        a = a + 1
    print("iteration ended")

g = generator(1, 5)
print("iterator created")

for e in g:
    print("for loop body start")
    print(e)
    print("for loop body end")

print("program done")

The output is:

iterator created
iteration started
for loop body start
1
for loop body end
for loop body start
2
for loop body end
for loop body start
3
for loop body end
for loop body start
4
for loop body end
iteration ended
program done

Please notice how "iterator created" is printed before "iteration started" - this means that creating the iterator is a separate step from starting the execution of its body, which happens only after the first call to next (effectively called by the for loop). Now consider an iterator that may not produce any values, such as this one:

def iterateNodeChildren(node):
    for e in node.sons:
        yield e

It won't produce values if you pass in a node without any sons. In order for hasNext to be able to answer with false here, it really must try to advance the iterator, so its real name should be advance or next and the concept can look like this:

type
  Iterator[T] = concept it
    it.next() is bool # try to advance the iterator to get to the next value
    it.value is T     # read the reached value as a cached field

... or you can use the next(): Option[T] signature that I suggested.

Using such a concept may be optimized just fine for simple loops, but I doubt that it will produce efficient code when higher-order iterators are created. Anyone can prove me wrong by writing the classic algorithms such as map, filter, etc by hand and then demonstrating the godbolt print out.
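As a rough illustration of why the nesting is hard to optimize, here is a sketch of map and filter written against the next()/value concept above (all names hypothetical; this is not a benchmark): each adapter is itself an object satisfying the concept, so chaining produces nested state machines.

```python
class Range:
    """Satisfies the concept: next() -> bool advances, .value caches."""
    def __init__(self, a, b):
        self.x, self.b, self.value = a - 1, b, None

    def next(self):
        self.x += 1
        if self.x > self.b:
            return False
        self.value = self.x
        return True

class Map:
    def __init__(self, inner, f):
        self.inner, self.f, self.value = inner, f, None

    def next(self):
        if not self.inner.next():
            return False
        self.value = self.f(self.inner.value)
        return True

class Filter:
    def __init__(self, inner, pred):
        self.inner, self.pred, self.value = inner, pred, None

    def next(self):
        # May advance the inner iterator several times per step;
        # this nesting is what the compiler must flatten to match
        # hand-written loops.
        while self.inner.next():
            if self.pred(self.inner.value):
                self.value = self.inner.value
                return True
        return False

it = Filter(Map(Range(1, 5), lambda x: x * 2), lambda x: x > 4)
out = []
while it.next():
    out.append(it.value)
print(out)  # [6, 8, 10]
```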

Also, it was suggested that zero-functional is using Araq's transformation to achieve its high performance, but that's not the case. Here is an example taken from the README:

var n = zip(a, b) -->
            map(f(it[0], it[1])).
            filter(it mod 4 > 1).
            map(it * 2).
            all(it > 4)

It's compiled to:

(proc (): auto =
  var minHigh134598 = min([high(a), high(b)])
  var empty = true
  for z in low(a) .. minHigh134598:
    var it0 = (a[z], b[z])
    var it1 = f(it0[0], it0[1])
    if it1 mod 4 > 1:
      var it2 = it1
      var it3 = it2 * 2
      result = true
      if not(it3 > 4):
        return false)()

So, it's doing inlining, only in user space and with a limited set of algorithms.

@mratsim
Collaborator

mratsim commented Sep 12, 2018

Araq's design is just a simple state machine. That is very efficient.

Another look into Stream/iterators in Rust (state-machine based) and Haskell (coroutine-based): https://www.fpcomplete.com/blog/2017/07/iterators-streams-rust-haskell

@zah
Member

zah commented Sep 12, 2018

@mratsim, from your own citation:

Why are Rust iterators slower than looping?

If you remember, there was a substantial slowdown when going from Rust loops to Rust iterators. This was a bit disappointing to me. I'd like to understand why. Unfortunately, I don't have an answer right now, only a hunch. And that hunch is that the double-inner-loop problem is kicking in. This is just conjecture right now.

The reality is that the "simple state machine" needs the compiler to perform inter-procedural switch-statement elimination to be able to remove the state variables and compile the code as regular loops. This may happen if you are lucky and the functions get inlined, but not otherwise.

@Araq Araq added the Postponed label Sep 14, 2018
@Araq
Member Author

Araq commented Sep 14, 2018

The fact that ..< etc. are currently both iterators and templates/procs makes this change unreasonably time-consuming to roll out. Postponed.

@Bulat-Ziganshin

Bulat-Ziganshin commented Sep 15, 2018

@Araq what about replacing iterator .. with iterator items(HSlice[T,U])?

Overall, we can replace each existing iterator with a procedure building a temporary object, and a multitude of procedures consuming this object for various types of iteration.

In particular, in addition to (or instead of) items(HSlice) we can provide Lua-style iterator over HSlice values.

Overall, in the for VARS in EXPR statement, we should check the type of EXPR - if it implements the Iterator concept (i.e. next/value procs), it can be used to implement the loop. If the type of EXPR implements the IteratorProc concept, i.e. supports proc doloop(EXPR: T; step: proc(): bool), then the for statement can be implemented by executing doloop, passing the loop body closure as the step parameter.

If EXPR doesn't implement any supported Iterator concept, we should replace it with items(EXPR) or pairs(EXPR) and check again.

This proposal essentially turns iterator declarations into syntax sugar for declaring objects that implement one or both Iterator concepts. One can also implement these concepts directly, allowing iterators to be implemented without the iterator keyword.

UPDATE: As I see, the iterator items(HSlice) already exists. So, it seems that we can just drop the .. iterator immediately. After doing the same with the other existing iterators we will be able to join the iterator and procedure namespaces, while support for more efficient iteration protocols in for is a separate topic.

Finally, we can recommend user libraries to do the same if they need to use the same name both as an object and an iterator.

@Araq
Member Author

Araq commented Sep 15, 2018

Overall, we can replace each existing iterator with a procedure building a temporary object, and a multitude of procedures consuming this object for various types of iteration. In particular, in addition to (or instead of) items(HSlice) we can provide a Lua-style iterator over HSlice values.

Yeah, I know, but the last time we did something very similar with how ^ and backwards slicing work, it turned out to be a time-consuming task with plenty of regressions.

@metagn
Collaborator

metagn commented Sep 15, 2018

It should also be noted that GCC does not optimize Slice[int].items to what .. does and runs 24 times slower according to this benchmark on TIO

@pietroppeter
Collaborator

Since this is apparently staying with us, we might as well think of it as a feature. 😄

This would allow e.g. the keys of a table to be used directly as a sequence (see also #13595), as one creative user did here:

import tables, algorithm

var table = init_table[string, int]()
table["b"] = 2
table["a"] = 1

# Proc with same name as Iterator
proc keys*[K, V](table: Table[K, V]): seq[K] =
  for k in table.keys: result.add k

# Nim properly resolves `keys` as `proc` and not as `iterator`
echo table.keys.sorted

@al6x

al6x commented Aug 27, 2020

Hi guys, as @pietroppeter mentioned I discovered this issue today :).

I tried table.keys.sorted and was surprised it didn't work, which was especially strange as I saw keys in the docs of the tables module.

+1 for keeping this feature until there is a better way to support function composition, like chaining etc. Being able to compose functions is an important feature; it would be sad to lose it.

@Araq
Member Author

Araq commented Aug 28, 2020

I still prefer a unification of procs and iterators but ok, you have a point.

@pietroppeter
Collaborator

as remarked by a user over reddit, in Nim official tutorial we actually have a sentence that says: "it is common practice to wrap iterators in procs of the same name which accumulate the result of the iterator and return it as a sequence, like split from the strutils module."

@timotheecour
Member

timotheecour commented Feb 4, 2021

@pietroppeter PR welcome to change this sentence from the tutorial to the opposite, now that we have toSeq:

It is an anti-pattern to wrap iterators in procs of the same name which accumulate the result of the iterator and return it as a sequence, since we can instead use toSeq(myIter()), which avoids bloating APIs.

best practices change as nim evolves.

@pietroppeter
Collaborator

@timotheecour to be honest, I actually kind of like this feature. I added the remark so that we have this tracked if at some point we decide to change it. As it stands now, I think the sentence is not that incorrect: it does happen in the stdlib that a proc overloads an iterator. And if you were to declare it an anti-pattern, you would for consistency have to at least deprecate its uses in the stdlib, which - this issue being closed - is not going to happen. If I were to change it, I would go with something like:

It is possible to wrap iterators in procs of the same name which accumulate the result of the iterator and return it as a sequence, like split from the strutils module. An alternative way to obtain a sequence from an iterator is to use toSeq template from sequtils.
