Fixes #93 - Editor cut-off value is now applied per-evaluation #98

Merged
merged 1 commit into from

2 participants

@dotta
Owner

review by @dragos

@dotta
Owner

I'm not really satisfied with the current implementation. Please let me know if you have any tip...

...aide/worksheet/runtime/IncrementalDocumentMixer.scala
((4 lines not shown))
-  def pruneOutput(newText: String): String = {
-    if (newText.length() > maximumOutputSize) {
-      val lastNL = newText.indexOf('\n', maximumOutputSize)
-      val endPos = if (lastNL == -1) newText.length else lastNL
-      val displayableText = newText.substring(0, endPos)
-      val suffix = if (displayableText.last == '\n') "" else "\n"
-      val lastLine = displayableText.lines.toSeq.last
-      val offset = lastLine.substring(0, lastLine.indexOf(' ', 0))
-      logger.debug("Cutting off large output at position: " + offset)
-      displayableText + suffix + offset + " Output exceeds cutoff limit. "
-    } else newText
+  def pruneOutputPerEvaluation(newText: String): String = {
+    // Extracts the offset prefixed to each line in the evaluation output
+    object Offset {
+      private val offsetExtractor = """^(\d*)""".r
+      def unapply(line: String): Option[String] = offsetExtractor.findFirstIn(line)
@dragos Owner
dragos added a note

RegEx objects already have unapplies. Couldn't you use the regex directly in patterns?

@dotta Owner
dotta added a note

Yeah, let's do that.
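For reference, a minimal sketch of this suggestion (illustrative names and sample values, not the plugin's code): a Regex built with `.r` already defines `unapplySeq`, so it can be used directly in a pattern. The one caveat is that, unlike `findFirstIn`, the pattern form only succeeds when the regex matches the whole line, hence the trailing `(.*)` group below.

import scala.util.matching.Regex

object RegexExtractorSketch extends App {
  // The pattern form of a Regex must match the entire input, so a second
  // group swallows the rest of the line after the leading offset digits.
  val offsetExtractor: Regex = """(\d+)(.*)""".r

  def offsetOf(line: String): Option[String] = line match {
    case offsetExtractor(offset, _) => Some(offset) // digits bound by the first capture group
    case _                          => None         // line does not start with an offset
  }

  println(offsetOf("42 res0: Int = 7")) // prints Some(42)
  println(offsetOf("no offset here"))   // prints None
}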

...aide/worksheet/runtime/IncrementalDocumentMixer.scala
((7 lines not shown))
-      val endPos = if (lastNL == -1) newText.length else lastNL
-      val displayableText = newText.substring(0, endPos)
-      val suffix = if (displayableText.last == '\n') "" else "\n"
-      val lastLine = displayableText.lines.toSeq.last
-      val offset = lastLine.substring(0, lastLine.indexOf(' ', 0))
-      logger.debug("Cutting off large output at position: " + offset)
-      displayableText + suffix + offset + " Output exceeds cutoff limit. "
-    } else newText
+  def pruneOutputPerEvaluation(newText: String): String = {
+    // Extracts the offset prefixed to each line in the evaluation output
+    object Offset {
+      private val offsetExtractor = """^(\d*)""".r
+      def unapply(line: String): Option[String] = offsetExtractor.findFirstIn(line)
+    }
+    // Groups together `lines` that share the same offset
+    @tailrec def groupLinesByOffset(lines: List[String], acc: LinkedHashMap[String, ListBuffer[String]]): LinkedHashMap[String, ListBuffer[String]] = lines match {
@dragos Owner
dragos added a note

Could you write this simpler by using List.groupBy?

@dragos Owner
dragos added a note

I see now that groupBy uses the default map, so it doesn't keep insertion order. But I guess you could still do something like

lines map { line => line match {
    case Offset(off) => (off, line)
    case _ => ???
  }
}

and not re-implement looping logic.

@dotta Owner
dotta added a note

Makes sense. Thanks!
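To make that concrete, one way to keep insertion order without re-implementing the looping logic is to tag each line with its offset and fold the pairs into a `LinkedHashMap`. A sketch (the object name and sample lines are made up; the merged change ended up keeping an explicit loop instead):

import scala.collection.mutable.{ LinkedHashMap, ListBuffer }

object GroupByOffsetSketch extends App {
  private val offsetExtractor = """(\d+)(.*)""".r

  // Pair each line with its leading offset, then fold the pairs into a
  // LinkedHashMap so the first-seen order of offsets is preserved.
  def groupLinesByOffset(lines: List[String]): LinkedHashMap[String, ListBuffer[String]] = {
    val tagged = lines.collect { case line @ offsetExtractor(offset, _) => offset -> line }
    tagged.foldLeft(LinkedHashMap.empty[String, ListBuffer[String]]) {
      case (acc, (offset, line)) =>
        acc.getOrElseUpdate(offset, ListBuffer.empty) += line
        acc
    }
  }

  // Offsets 10 and 25 come out in insertion order, each with its own lines.
  println(groupLinesByOffset(List("10 a", "10 b", "25 c")))
}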

@dragos
Owner

I don't think it's too difficult to add a test, it would be nice to have it.

@dotta dotta merged commit 29a93ab into scala-ide:master
@dragos dragos commented on the diff
...aide/worksheet/runtime/IncrementalDocumentMixer.scala
((7 lines not shown))
-      val endPos = if (lastNL == -1) newText.length else lastNL
-      val displayableText = newText.substring(0, endPos)
-      val suffix = if (displayableText.last == '\n') "" else "\n"
-      val lastLine = displayableText.lines.toSeq.last
-      val offset = lastLine.substring(0, lastLine.indexOf(' ', 0))
-      logger.debug("Cutting off large output at position: " + offset)
-      displayableText + suffix + offset + " Output exceeds cutoff limit. "
-    } else newText
+  def pruneOutputPerEvaluation(newText: String): String = {
+    // Groups together `lines` that share the same offset (lines order is maintained)
+    def groupLinesByOffset(lines: List[String]): LinkedHashMap[String, ListBuffer[String]] = {
+      val offsetExtractor = """^(\d*)""".r
+      val groupedEvaluations = LinkedHashMap.empty[String, ListBuffer[String]]
+
+      for(line <- lines) offsetExtractor.findFirstIn(line) match {
+        case Some(offset) =>
@dragos Owner
dragos added a note

You could even use this (that's what I meant saying that regex values are already extractors):

for (line <- lines) line match {
  case offsetExtractor(offset) =>
  case _ =>
}

(just nitpicking) :+1:

@dotta Owner
dotta added a note

Oooh, true!

Commits on Oct 16, 2012
  1. @dotta
30 org.scalaide.worksheet.tests/src/org/scalaide/worksheet/WorksheetEvalTest.scala
@@ -133,7 +133,10 @@ object testeval {
//| 5
//| 6
//| 7
- //| Output exceeds cutoff limit.
+ //| 8
+ //| 9
+ //| 10
+ //| Output exceeds cutoff limit.
}"""
withCutOffValue(50) { runTest("eval-test/test4.sc", initial, expected) }
@@ -166,6 +169,9 @@ object testeval {
//| 5
//| 6
//| 7
+ //| 8
+ //| 9
+ //| 10
//| Output exceeds cutoff limit.
}"""
import EvalTester._
@@ -226,6 +232,28 @@ object Main {
runTest("eval-test/symbolic.sc", initial, expected)
}
+ @Test
+ def cutOff_is_perEvaluation_t97() {
+ val initial = """
+object myws {
+ val even = for(i <- 1 until 1000) yield 2*i
+ val odd = for(i <- 1 until 1000) yield (2*i+1)
+}
+"""
+
+ val expected = """
+object myws {
+ val even = for(i <- 1 until 1000) yield 2*i //> even : scala.collection.immutable.IndexedSeq[Int] = Vector(2, 4, 6, 8, 10, 1
+ //| 2, 14, 16, 18, 2
+ //| Output exceeds cutoff limit.
+ val odd = for(i <- 1 until 1000) yield (2*i+1)//> odd : scala.collection.immutable.IndexedSeq[Int] = Vector(3, 5, 7, 9, 11, 1
+ //| 3, 15, 17, 19,
+ //| Output exceeds cutoff limit.
+}"""
+
+ withCutOffValue(100) { runTest("eval-test/test97.sc", initial, expected) }
+ }
+
/** Temporarily set the cut off value to `v`. */
private def withCutOffValue(v: Int)(block: => Unit) {
val prefs = WorksheetPlugin.plugin.getPreferenceStore()
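As an aside, `withCutOffValue` follows the usual save/override/restore shape around the Eclipse preference store. A hypothetical, generic sketch of that shape (not the test code itself):

object WithSettingSketch extends App {
  // Hypothetical helper illustrating the save/override/restore pattern behind withCutOffValue.
  def withSetting[A](get: => A, set: A => Unit)(tempValue: A)(block: => Unit): Unit = {
    val saved = get      // remember the current value
    set(tempValue)       // install the temporary value for the duration of `block`
    try block
    finally set(saved)   // restore the original value even if `block` throws
  }

  var cutOff = 50 // stands in for the plugin's cut-off preference
  withSetting(cutOff, (v: Int) => cutOff = v)(100) {
    println(s"cut-off inside the block: $cutOff")  // prints 100
  }
  println(s"cut-off restored afterwards: $cutOff") // prints 50
}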
59 org.scalaide.worksheet/src/org/scalaide/worksheet/runtime/IncrementalDocumentMixer.scala
@@ -2,28 +2,30 @@ package org.scalaide.worksheet.runtime
import java.io.Writer
-import scala.actors.{ Actor, DaemonActor, TIMEOUT }
+import scala.annotation.tailrec
+import scala.actors.{Actor, DaemonActor, TIMEOUT}
+import scala.collection.mutable.LinkedHashMap
+import scala.collection.mutable.ListBuffer
import scala.tools.eclipse.logging.HasLogger
import org.scalaide.worksheet.editor.DocumentHolder
import org.scalaide.worksheet.text.{ Mixer, SourceInserter }
object IncrementalDocumentMixer {
-  def apply(editor: DocumentHolder, source: Writer, maxOutput: Int = MaximumOutputChars): Actor = {
+  def apply(editor: DocumentHolder, source: Writer, maxOutput: Int): Actor = {
    val incrementalMixer = new IncrementalDocumentMixer(editor, source, maxOutput)
    incrementalMixer.start()
    incrementalMixer
  }
-  final val RefreshDocumentTimeout = 200
-  final val MaximumOutputChars = 100
+  private final val RefreshDocumentTimeout = 200
  /** How many ticks should pass with output not changing before adding a spinner? */
-  final val InfiniteLoopTicks = 1000 / RefreshDocumentTimeout // 1 sec
+  private final val InfiniteLoopTicks = 1000 / RefreshDocumentTimeout // 1 sec
}
private class IncrementalDocumentMixer private (editor: DocumentHolder, source: Writer, val maximumOutputSize: Int) extends DaemonActor with HasLogger {
-  import IncrementalDocumentMixer.{ RefreshDocumentTimeout, InfiniteLoopTicks }
+  import IncrementalDocumentMixer.{ RefreshDocumentTimeout, InfiniteLoopTicks}
  private val originalContent = editor.getContents
  private val stripped = SourceInserter.stripRight(editor.getContents.toCharArray)
@@ -45,7 +47,7 @@ private class IncrementalDocumentMixer private (editor: DocumentHolder, source:
  private def updateDocument(newText: String): Unit = {
    if (newText.length > 0) {
-      val (mixed, lastInsertion) = mixer.mix(stripped, showLongRunning(pruneOutput(newText)).toCharArray())
+      val (mixed, lastInsertion) = mixer.mix(stripped, showLongRunning(pruneOutputPerEvaluation(newText)).toCharArray())
      editor.replaceWith(mixed.mkString, lastInsertion)
    }
  }
@@ -73,17 +75,38 @@ private class IncrementalDocumentMixer private (editor: DocumentHolder, source:
}
}
-  def pruneOutput(newText: String): String = {
-    if (newText.length() > maximumOutputSize) {
-      val lastNL = newText.indexOf('\n', maximumOutputSize)
-      val endPos = if (lastNL == -1) newText.length else lastNL
-      val displayableText = newText.substring(0, endPos)
-      val suffix = if (displayableText.last == '\n') "" else "\n"
-      val lastLine = displayableText.lines.toSeq.last
-      val offset = lastLine.substring(0, lastLine.indexOf(' ', 0))
-      logger.debug("Cutting off large output at position: " + offset)
-      displayableText + suffix + offset + " Output exceeds cutoff limit. "
-    } else newText
+  def pruneOutputPerEvaluation(newText: String): String = {
+    // Groups together `lines` that share the same offset (lines order is maintained)
+    def groupLinesByOffset(lines: List[String]): LinkedHashMap[String, ListBuffer[String]] = {
+      val offsetExtractor = """^(\d*)""".r
+      val groupedEvaluations = LinkedHashMap.empty[String, ListBuffer[String]]
+
+      for(line <- lines) offsetExtractor.findFirstIn(line) match {
+        case Some(offset) =>
+          groupedEvaluations.get(offset) match {
+            case None => groupedEvaluations += (offset -> ListBuffer(line))
+            case Some(result) => groupedEvaluations += ((offset, result += line))
+          }
+        case None =>
+          logger.error("Discarded evaluation result, this is a bug. Please open a ticket and make sure to provide the worksheet source that generated this warning.")
+      }
+
+      groupedEvaluations
+    }
+
+    val lines = newText.split('\n')
+    val linesByOffset = groupLinesByOffset(lines.toList)
+
+    val evaluationResults = for((offset, evaluationOutputs) <- linesByOffset) yield {
+      val evaluationResult = evaluationOutputs.mkString("\n")
+      (if(evaluationResult.length <= maximumOutputSize) evaluationResult
+      else {
+        logger.debug("Cutting off large output at position: " + offset)
+        evaluationResult.take(maximumOutputSize) + '\n' + offset + " Output exceeds cutoff limit."
+      })
+    }
+
+    evaluationResults.mkString("\n")
  }
  override def toString: String = "IncrementalDocumentMixer <actor>"
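To illustrate the behavioural change, here is a simplified standalone sketch of the per-evaluation rule (hypothetical offsets and outputs, not the plugin's code): each evaluation's output is checked against the cut-off on its own, so one long result no longer eats the budget of the results that follow it.

object PerEvaluationPruningSketch extends App {
  val maximumOutputSize = 20 // illustrative cut-off; the plugin reads the real value from its preference store

  // One entry per evaluation, keyed by a made-up instrumentation offset.
  val resultsByOffset = List(
    "10" -> "10 even: Vector(2, 4, 6, 8, 10, 12, 14)",
    "25" -> "25 odd: Vector(3, 5)"
  )

  // Truncate each evaluation independently, mirroring the truncation rule in pruneOutputPerEvaluation.
  val pruned = resultsByOffset.map { case (offset, evaluationResult) =>
    if (evaluationResult.length <= maximumOutputSize) evaluationResult
    else evaluationResult.take(maximumOutputSize) + '\n' + offset + " Output exceeds cutoff limit."
  }

  println(pruned.mkString("\n"))
  // 10 even: Vector(2, 4
  // 10 Output exceeds cutoff limit.
  // 25 odd: Vector(3, 5)
}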