Release 0.10 #1459
Closed
Conversation
…irection if set in the configuration. This closes #1281
Before, this was not set, leading to incorrect results if KV-State was used in the WindowFunction. This also adds a test.
…failed jobs in new web dashboard. This closes #1287
…er appended. This closes #1279
Before it would only trigger if expectedTime < time. Now it is expectedTime <= time.
Before, StreamRecords were not copied; now they are.
This fixes the aggregators to make copies of the objects so that it works with window operators that are not mutable-object safe. This also disables object copy in WindowOperator and NonKeyedWindowOperator.
…k-runtime tests. Rename Match*Test to Join*Test and MapTaskTest to FlatMapTaskTest. This closes #1294
Rename config key prefix from 'ha.zookeeper' to 'recovery.zookeeper'. Rename config key from 'state.backend.fs.dir.recovery' to 'state.backend.fs.recoverydir'. Move ZooKeeper file system state backend configuration keys. This closes #1286
Operators defer object creation when object reuse is disabled. This closes #1288
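The deferral can be sketched as follows. This is an illustrative stand-in, not Flink's actual operator code; the class and method names are hypothetical. When object reuse is disabled, no reuse instance is allocated up front and a fresh record is created per element instead.

```java
import java.util.function.Supplier;

// Illustrative sketch only: with object reuse disabled, operators no longer
// eagerly allocate a reuse instance; they create a fresh record per element.
public class ReuseSketch<T> {
    private final boolean objectReuseEnabled;
    private final Supplier<T> factory;
    private T reuse; // allocated lazily, and only when reuse is enabled

    public ReuseSketch(boolean objectReuseEnabled, Supplier<T> factory) {
        this.objectReuseEnabled = objectReuseEnabled;
        this.factory = factory;
    }

    // Returns the instance to deserialize the next record into: a shared
    // instance when reuse is enabled, a fresh object per call otherwise.
    public T nextRecordInstance() {
        if (objectReuseEnabled) {
            if (reuse == null) {
                reuse = factory.get(); // creation deferred until first use
            }
            return reuse;
        }
        return factory.get();
    }

    public static void main(String[] args) {
        ReuseSketch<Object> on = new ReuseSketch<>(true, Object::new);
        ReuseSketch<Object> off = new ReuseSketch<>(false, Object::new);
        System.out.println(on.nextRecordInstance() == on.nextRecordInstance());   // true
        System.out.println(off.nextRecordInstance() == off.nextRecordInstance()); // false
    }
}
```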
Before these trigger methods had no information about the window that they are responsible for. This information might be required for implementing more advanced trigger behaviour.
This enhances the TriggerResult enum with methods isFire() and isPurge() that simplify the logic in WindowOperator.processTriggerResult(). Also, the operator now keeps track of the current watermark and fires immediately if a trigger registers an event-time callback for a timestamp that lies in the past. For this, TriggerResult now has a method merge() that allows merging two TriggerResults.
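The described helpers could look roughly like this. This is a hedged sketch that mirrors the behaviour stated in the commit message, not the actual Flink source; the merge semantics shown (fire if either fires, purge if either purges) are an assumption.

```java
// Sketch of the TriggerResult helpers described above (not Flink's source).
enum TriggerResult {
    CONTINUE(false, false), FIRE(true, false), PURGE(false, true), FIRE_AND_PURGE(true, true);

    private final boolean fire;
    private final boolean purge;

    TriggerResult(boolean fire, boolean purge) {
        this.fire = fire;
        this.purge = purge;
    }

    public boolean isFire()  { return fire; }
    public boolean isPurge() { return purge; }

    // Assumed merge semantics: the merged result fires if either input
    // fires, and purges if either input purges.
    public static TriggerResult merge(TriggerResult a, TriggerResult b) {
        boolean fire = a.isFire() || b.isFire();
        boolean purge = a.isPurge() || b.isPurge();
        if (fire && purge) return FIRE_AND_PURGE;
        if (fire)          return FIRE;
        if (purge)         return PURGE;
        return CONTINUE;
    }

    public static void main(String[] args) {
        System.out.println(merge(FIRE, PURGE)); // FIRE_AND_PURGE
    }
}
```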
…handle. This closes #1282
…ation of fast-path windows.
HandlerRedirectUtils.getRedirectAddress checks whether the retrieved leader address equals the local JobManager address. The local JobManager address has the form akka.tcp://flink@url/user/jobmanager, whereas the leader address can be akka://flink/user/jobmanager if the local JobManager is the current leader. Such a case incorrectly produced a warning. This PR checks for the local JobManager address and signals that no redirection has to be done if it receives akka://flink/user/jobmanager. Add test for HandlerRedirectUtils. This closes #1280.
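The check can be sketched as below. The class and method names are hypothetical; only the two address formats come from the commit message. The idea is that a leader address without a host part (akka:// rather than akka.tcp://) means the local JobManager is itself the leader, so no redirect is needed and no warning should be logged.

```java
// Hedged sketch of the local-leader check described above; names are
// illustrative, not the actual HandlerRedirectUtils implementation.
public class RedirectCheckSketch {

    static boolean requiresRedirect(String leaderAddress, String localJobManagerAddress) {
        // akka://flink/user/jobmanager (no host part) is the local actor
        // path: the local JobManager is the leader, so do not redirect.
        if (leaderAddress.startsWith("akka://")) {
            return false;
        }
        // Otherwise redirect only if the leader is a different JobManager.
        return !leaderAddress.equals(localJobManagerAddress);
    }

    public static void main(String[] args) {
        String local = "akka.tcp://flink@localhost:6123/user/jobmanager";
        System.out.println(requiresRedirect("akka://flink/user/jobmanager", local)); // false
        System.out.println(requiresRedirect(local, local));                          // false
        System.out.println(
            requiresRedirect("akka.tcp://flink@other:6123/user/jobmanager", local)); // true
    }
}
```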
… flink-conf of binary distribution
Otherwise the javadoc fails to generate.
….log and taskmanager.stdout. This closes #1307
…ning jobs by start time. This closes #1296
…nstead of http. This closes #1309
The Kryo serializer uses Kryo's Output class to buffer individual write operations before they are written to the underlying output stream. This Output is flushed by Flink's KryoSerializer upon finishing its serialize call. However, if an exception occurs while flushing the Output, the buffered data is kept in the buffer. Since Flink uses EOFExceptions to mark that an underlying buffer is full and has to be spilled, for example, the record that triggered the spilling can end up being written twice when it is rewritten: Kryo's Output buffer still contains the serialization data of the failed attempt, which is then flushed again to the emptied output stream. This duplication of records can lead to corrupted data which eventually lets the Flink program crash. The problem is solved by clearing Kryo's Output when the flush operation was not successful. This closes #1308
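The fix can be illustrated with a self-contained sketch. The Output class below is a minimal stand-in for Kryo's buffer, not the real Kryo API, and serializeAndFlush is a hypothetical helper; the point is only the clear-on-failed-flush behaviour.

```java
import java.io.EOFException;
import java.io.IOException;

// Illustrative sketch of clearing the serializer's buffer when a flush fails,
// so the bytes of the failed attempt are not written a second time on retry.
public class FlushClearSketch {

    // Minimal stand-in for Kryo's Output buffer (not the real class).
    static class Output {
        StringBuilder buffer = new StringBuilder();
        boolean targetFull; // simulates a full underlying output stream

        void write(String data) { buffer.append(data); }

        void flush() throws IOException {
            if (targetFull) {
                throw new EOFException("target buffer full, must be spilled");
            }
            buffer.setLength(0); // flushed successfully
        }

        void clear() { buffer.setLength(0); }
    }

    // The fix: on a failed flush, clear the Output before rethrowing, so a
    // rewrite of the record does not duplicate the buffered bytes.
    static void serializeAndFlush(Output out, String record) throws IOException {
        out.write(record);
        try {
            out.flush();
        } catch (IOException e) {
            out.clear();
            throw e;
        }
    }

    public static void main(String[] args) {
        Output out = new Output();
        out.targetFull = true;
        try {
            serializeAndFlush(out, "record");
        } catch (IOException e) {
            // buffer was cleared: a retry will not write the record twice
            System.out.println(out.buffer.length()); // 0
        }
    }
}
```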
This fixes a problem that would occur if a Trigger registers a new processing-time trigger in its onProcessingTime() method. The problem is that onProcessingTime() is called while traversing the set of active triggers; if it tries to register a new processing-time trigger, this leads to a concurrent modification exception.
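One common way to avoid such a concurrent modification is to buffer registrations made during the traversal and merge them in afterwards. The sketch below shows that pattern; all names are hypothetical and this is not the actual WindowOperator code.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: timer registrations made from within a firing
// callback are deferred until the traversal over the active set finishes.
public class TimerSketch {
    final Set<Long> activeTimers = new HashSet<>();
    final List<Long> pendingRegistrations = new ArrayList<>();
    private boolean firing;

    public void registerProcessingTimeTimer(long timestamp) {
        if (firing) {
            // We are iterating over activeTimers right now: defer the add
            // to avoid a ConcurrentModificationException.
            pendingRegistrations.add(timestamp);
        } else {
            activeTimers.add(timestamp);
        }
    }

    // Fires all timers with timestamp <= time. Callbacks may safely
    // register new processing-time timers while this runs.
    public void advanceTime(long time) {
        firing = true;
        activeTimers.removeIf(t -> {
            if (t <= time) {
                onProcessingTime(t);
                return true;
            }
            return false;
        });
        firing = false;
        activeTimers.addAll(pendingRegistrations);
        pendingRegistrations.clear();
    }

    // Stand-in for the Trigger callback; a periodic trigger might
    // re-register itself here, which is exactly the problematic case.
    void onProcessingTime(long timestamp) {
        registerProcessingTimeTimer(timestamp + 1000);
    }

    public static void main(String[] args) {
        TimerSketch s = new TimerSketch();
        s.registerProcessingTimeTimer(100);
        s.advanceTime(100); // fires timer 100; callback registers 1100 safely
        System.out.println(s.activeTimers.contains(1100L)); // true
    }
}
```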
This commit is also changing how we build the "flink-shaded-hadoop" artifact. In the past, we were including all Hadoop dependencies into a fat jar, without relocating all of them. Maven was not able to see Hadoop's dependencies and classes ended up in the classpath multiple times. With this change, only shaded Hadoop dependencies are included into the jar. The shade plugin will also remove only the shaded dependencies from the pom file.
Streaming sources were directly assigned their InputFormat in the StreamingJobGraphGenerator. As a consequence, the input formats were directly serialized/deserialized by Akka when the JobGraph was sent to the JobManager. In cases where the user provided a custom input format or an input format with custom types, this could lead to a ClassDefNotFoundException, because the system class loader instead of the user code class loader was used by Akka for the deserialization. The problem was fixed by wrapping the InputFormat into a UserCodeObjectWrapper which is shipped to the JobManager via the JobVertex's configuration. By instantiating stream sources as InputFormatVertices, the corresponding InputFormat is retrieved from the Configuration in the initializeOnMaster method call. This closes #1368.
When a candidate for a bulk iteration is instantiated, the optimizer creates candidates for the step function. It is then checked that there exists a candidate solution for the step function whose properties meet the properties of the input to the bulk iteration. Sometimes it is necessary to add a no-op plan node to the end of the step function to generate the correct properties. These new candidates have to be added to the final set of accepted candidates. This commit ensures that these new candidates are properly added to the set of accepted candidates. Fix test and add new iteration tests. Add predecessor operator and dynamic path information to the no-op operator in bulk iterations. This closes #1388.
…nment.readSequenceFile(). This closes #1389.
…sm in local execution
This closes #1399
…exists. This closes #1411
Before, it would not allow unioning a stream with its predecessors (including transitive predecessors) or with streams of differing parallelism.
New users sometimes struggle with the imports (especially for Scala API).
Added Time.of(Time), because there is no TumblingTimeWindows.of(int, TimeUnit).
…og files cannot be accessed.
- This solves errors with reflectasm using Scala 2.11 and Java 8
Hi @guochuanzhi, it looks like your PR does not include any commits of yours, only changes that were already committed. It also does not have a description. Can you please close this PR? If you would like to contribute to Flink, please have a look at the contribution guidelines.
I'll close this PR.