Attempt at #SERVER-86 -- ConcurrentModificationException #64

Closed · wants to merge 1 commit into base: develop
Conversation

ansyeow commented Dec 15, 2011

Hi Lennart & All,

This is my attempt at fixing #SERVER-86.

--I have combined the methods getCount() and reset() into one method named getCountAndReset().
--The test code is very messy, but it appears to work.
--The GetAllHostsAndGetCountAndResetCallable submitted halfway through the increment loop was my
attempt at simulating a "get count" call while increments were being submitted by other threads.

I am really new at multi-threaded code. Hope to learn something here. :)

Andrew

lennartkoopmann commented Dec 15, 2011

Thank you very much, but there is a simpler way... :) http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ConcurrentHashMap.html

Fixed in: c185fd0

Thanks for contributing anyways :)

ansyeow commented Dec 15, 2011

Hi Lennart,

I ran the test I wrote against the current fix (out of curiosity, since the test is already written), and I found the test failure output below:

Testcase: testTwoIncrementCallable(org.graylog2.database.HostCounterCacheTest): FAILED
cumulativeCounts: arrays first differed at element [1]; expected:<227> but was:<225>
junit.framework.AssertionFailedError: cumulativeCounts: arrays first differed at element [1]; expected:<227> but was:<225>
    at org.graylog2.database.HostCounterCacheTest.testTwoIncrementCallable(HostCounterCacheTest.java:64)
Test org.graylog2.database.HostCounterCacheTest FAILED

The same test passed consistently against the version I submitted.

I am not sure which part is making the difference (and which part is not :P). Items I can list are:
1. Combining the methods getCount() and reset() into one atomic method
2. Use of putIfAbsent() [from ConcurrentMap]
3. Use of incrementAndGet() [from AtomicInteger]
4. Use of remove(K, V) [from ConcurrentMap]

One scenario I can think of that could cause the failure relates to item (1) above:
--The collected count [225] is less than the generated count [227].
--Increments that arrive after getCount() but before reset() are dropped.
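The combined read-and-reset idea from item (1) can be sketched roughly like this (a minimal illustration, not the actual graylog2-server code; class and method names just follow the discussion above):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class HostCounterCache {
    private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    public void increment(String hostname) {
        AtomicInteger counter = counts.get(hostname);
        if (counter == null) {
            // putIfAbsent returns the existing value if another thread won the race.
            AtomicInteger fresh = new AtomicInteger();
            AtomicInteger existing = counts.putIfAbsent(hostname, fresh);
            counter = (existing == null) ? fresh : existing;
        }
        counter.incrementAndGet();
    }

    // remove() detaches the counter atomically, so increments arriving after
    // this call land in a fresh counter instead of being silently dropped
    // between a separate getCount() and reset().
    public int getCountAndReset(String hostname) {
        AtomicInteger counter = counts.remove(hostname);
        return (counter == null) ? 0 : counter.get();
    }
}
```

Note that even this leaves a narrow window: a thread that already holds a reference to a just-removed counter can still lose an increment; closing that window entirely would need a different design.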

I appreciate your review of my submission. I understand that this issue might not be as critical as some other issues, especially if looking into it takes up more time. I am already enjoying all the improvements going on in this project. :)

Andrew

joschi added a commit that referenced this pull request Feb 15, 2018

Make conversion functions more consistent (#64)
* there was a bug with to_string returning null instead of its default value (refs #63)
* all core conversion functions now return their "default empty" value if the value is `null`
  - String: ""
  - bool: false
  - double: 0d
  - long: 0L
  - IP: V4 ANY (0.0.0.0)
 * adds test cases for all cases, including the edge cases
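The "default empty" convention described above can be illustrated with a small sketch (method names are illustrative stand-ins, not the actual Graylog conversion function classes):

```java
public class ConversionDefaults {
    // Each converter returns its type's "default empty" value for null input.
    static String toStringOrEmpty(Object value) {
        return value == null ? "" : String.valueOf(value);
    }

    static boolean toBoolOrFalse(Object value) {
        return value != null && Boolean.parseBoolean(String.valueOf(value));
    }

    static double toDoubleOrZero(Object value) {
        if (value == null) return 0d;
        try {
            return Double.parseDouble(String.valueOf(value));
        } catch (NumberFormatException e) {
            return 0d;
        }
    }

    static long toLongOrZero(Object value) {
        if (value == null) return 0L;
        try {
            return Long.parseLong(String.valueOf(value));
        } catch (NumberFormatException e) {
            return 0L;
        }
    }
}
```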

joschi added a commit that referenced this pull request Feb 15, 2018

Fix FunctionsSnippetsTest after conversion function unification (#70)
`evalError()` can no longer trigger the error tested for and was removed.
`evalErrorSuppressed()` now tests an illegal default value in `to_ip()`.

Fixes #64

kroepke added a commit that referenced this pull request Feb 16, 2018

Merge Pipeline Processor plugin into Graylog core (#4590)
* Call `finishProcessing` in the right place

We should call `InterpreterListener.finishProcessing()` only once, but
before that was not necessarily true, as it was being called inside a
loop.

Instead, now we call it in the same method as `startProcessing()`, after
all processing has been finished.

Fixes #51

* Adapt to changed decorators interface (#43)

* Providing a message decorator that uses pipelines.
* Making decorator configurable.
* Allow adding new messages by pipeline decorator.
* Adding changes related due to introduced listener.
* Adapt to naming changes, using easier forEach idiom.
* Changing decorator to work on SearchResponse instead of message list.

* Improve error message on regex()

* Update evalError test rule

Since #32, the behaviour of `to_ip()` changed and now it doesn't raise
an exception in those circumstances. Now we use `regex()` to throw an
exception instead.

* Add sample Decorator preset. (#52)

* Providing a message decorator that uses pipelines.
* Making decorator configurable.
* Allow adding new messages by pipeline decorator.
* Adding changes related due to introduced listener.
* Adapt to naming changes, using easier forEach idiom.
* Changing decorator to work on SearchResponse instead of message list.
* Adding decoration stats for pipeline processor decorator.
* Add uppercase decorator using pipelines interpreter with preset.
* Decorators don't need to generate decoration stats on their own anymore.

* Bumping versions to 2.1.0-beta.1 / 1.1.0-beta.1

* [graylog-plugin-pipeline-processor-release] prepare release 1.1.0-beta.1

* [graylog-plugin-pipeline-processor-release] prepare for next development iteration

* Bumping graylog dependency version to next development iteration.

* Making pipeline decorator more robust if removed pipelines are ref'd.

* Fix permissions on system navigation route

* Update npm dependencies

Refs #2544

* Fix cancel button on new pipeline form

Fixes #57

* Update Travis CI configuration

Disable building the web-part of this project because it would require a
full checkout of the Graylog web interface.

* Add missing license headers

* Move -Dskip.web.build=true into install/script commands (Travis CI)

* Fix Travis CI badge in README.md

[ci skip]

* Run on Trusty build environment (Travis CI)

* Fix issues with app prefix (#66)

* Use core Routes instead of literals
  This allows us to prefix routes if needed.
* Make plugin aware of __webpack_public_path__ setting

Refs #2564

* Support "only named captures" for pipeline grok function (#65)

The server cache is necessary because the named captures support needs a separately compiled regex.
So far the cache is only used by the grok function in the pipeline processor

Closes #59

* Make conversion functions more consistent (#64)

* there was a bug with to_string returning null instead of its default value (refs #63)
* all core conversion functions now return their "default empty" value if the value is `null`
  - String: ""
  - bool: false
  - double: 0d
  - long: 0L
  - IP: V4 ANY (0.0.0.0)
 * adds test cases for all cases, including the edge cases

* Add type hint for Travis CI's Java compiler to be happy

* Update Maven plugins

* Sync dependencies with Graylog 2.1.0-beta.2-SNAPSHOT

* Disable failing tests in FunctionsSnippetsTest

Refs #64

* Fix FunctionsSnippetsTest after conversion function unification (#70)

`evalError()` can no longer trigger the error tested for and was removed.
`evalErrorSuppressed()` now tests an illegal default value in `to_ip()`.

Fixes #64

* Remove wildcard type cast to make IDEA 2016.2 happy (#71)

This change should not affect `javac` at all, but intellij flags the collect call with having two errors.

* Bumping versions to 2.1.0-beta.2 / 1.1.0-beta.2

* [graylog-plugin-pipeline-processor-release] prepare release 1.1.0-beta.2

* [graylog-plugin-pipeline-processor-release] prepare for next development iteration

* Bumping graylog dependency version to next development iteration.

* Remove upper case decorator (#73)

There is no real use case for this besides being a good test case for
the decorator system development.

Fixes #2588

* do not preprocess arguments of the error function

fixes #25

* unwrap JsonNode values (#72)

e.g. strings would be double quoted without this

fixes #68

* add optional prefix/suffix to set_fields functions (#75)

fixes #74

* Add key-value parsing function (#77)

Fixes #38

* Allow selection of an input ID for the simulation message (#78)

* Invert equals() call to avoid possible null pointer exception

Fixes #2610

* Allow selection of an input ID for the simulation message

Pipeline rules may use the `from_input` function in a condition so this
is needed to make the simulation work.

Refs #2610

* Bumping versions to 2.1.0-beta.3 / 1.1.0-beta.3

* [graylog-plugin-pipeline-processor-release] prepare release 1.1.0-beta.3

* [graylog-plugin-pipeline-processor-release] prepare for next development iteration

* Bump server dependency to 2.1.0-beta.4-SNAPSHOT

* Unregister PipelineInterpreter from event bus (#79)

Message decorators and the pipeline simulator create new instances of
PipelineInterpreters that never get garbage collected, as they are still
registered in the event bus.

These changes add a simple workaround for that. We should probably
refactor the lifecycle of the PipelineInterpreter, but this is probably
not the best time to do it.

* Pipeline UI improvements (#83)

* Make changes summary the default view on simulator
* Improve message when there are no pipeline connections
  Include links to rules and pipelines if there are no pipelines
  available.
* Display pipelines using a certain rule
  On the edit rule page, show pipelines that are using that rule,
  including a link to them.
* Fix add new pipeline button position
* Improve navigation options
  Change some of the options in the top right navigation to adapt better
  to the workflow.
* Disable message actions in simulator
  This is a left over of when we used real messages to test the pipelines.

Fixes #2683

* Dynamic function list (#89)

* add resource to access entire function registry

* add descriptions to Functions and Parameters

add basic UI for displaying function descriptors
not searchable yet

* descriptions for conversion functions

* descriptions for date functions

* descriptions for hash functions

* description for cidrmatch

* description for json functions

* descriptions for message functions

* descriptions for null/notnull functions

* descriptions for from_input

* descriptions for string functions

* tweak table widths #lolcss

* Use find in the regex function (#88)

Do not force regular expressions passed to `regex()` to match the whole
string.

Fixes #35
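The behavioural difference can be seen directly with `java.util.regex` (a standalone illustration of the change, not the plugin's code):

```java
import java.util.regex.Pattern;

public class RegexFindVsMatches {
    // matches() implicitly anchors the pattern to the entire input string.
    static boolean wholeMatch(String pattern, String input) {
        return Pattern.compile(pattern).matcher(input).matches();
    }

    // find() succeeds if the pattern occurs anywhere in the input, which
    // is the behaviour regex() uses after this change.
    static boolean findAnywhere(String pattern, String input) {
        return Pattern.compile(pattern).matcher(input).find();
    }

    public static void main(String[] args) {
        System.out.println(wholeMatch("foo", "prefix foo suffix"));   // false
        System.out.println(findAnywhere("foo", "prefix foo suffix")); // true
    }
}
```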

* Bumping versions to 2.1.0-beta.4 / 1.1.0-beta.4

* [graylog-plugin-pipeline-processor-release] prepare release 1.1.0-beta.4

* [graylog-plugin-pipeline-processor-release] prepare for next development iteration

* Bumping graylog dependency version to next development iteration.

* Support DateTime comparison (#92)

Add support for comparison operators on DateTime objects.

Fixes #86

* Make some small UI changes around RuleHelper (#90)

- Update text descriptions
- Hide page selector input

* use shared classloader so other plugins can contribute functions (#94)

fixes #81

* Fix loading issue by removing <includes> from resource

* Rename graylog2.{plugin-dir,version} to graylog.*

* Add missing Graylog-Plugin-Properties-Path manifest entry

* Integrate audit log (#96)

* Integrate audit log
* Replace edit with update action on pipeline connections
* Add license

* add parse error handler for precompute args failures (#93)

* add parse error handler for precompute args failures

allow lowercase timezone ids

* make error message look nicer

* don't hit undo before commit…

* Fix license check

* Bumping versions to 2.1.0-rc.1 / 1.1.0-rc.1

* [graylog-plugin-pipeline-processor-release] prepare release 1.1.0-rc.1

* [graylog-plugin-pipeline-processor-release] prepare for next development iteration

* Bumping graylog dependency version to next development iteration.

* Fix page size in function list (#97)

`slice()` doesn't include the end element in the array it returns, so
we were rendering 9 functions instead of 10.
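The same off-by-one exists with Java's `List.subList()`, which, like JavaScript's `slice()`, excludes the end index (an illustrative sketch, not the UI code):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PageSlice {
    // Returns one page of items. The end index must be start + pageSize,
    // not start + pageSize - 1, because the end index is exclusive.
    static <T> List<T> page(List<T> items, int page, int pageSize) {
        int start = page * pageSize;
        int end = Math.min(start + pageSize, items.size());
        return items.subList(start, end);
    }

    public static void main(String[] args) {
        List<Integer> functions =
                IntStream.rangeClosed(1, 25).boxed().collect(Collectors.toList());
        System.out.println(page(functions, 0, 10).size()); // 10 (full page)
        System.out.println(page(functions, 2, 10).size()); // 5 (last, partial page)
    }
}
```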

* Add '@test' annotation to keyValue test

Refs #77

* Correct trim value argument description

* Update version to 1.2.0-SNAPSHOT

* use case insensitive lookup for timezone IDs (#102)

* use case insensitive lookup for timezone IDs

simply upper-casing timezone IDs failed for strings like 'Europe/Moscow'. Unfortunately the forID function is case sensitive.

fixes #100

* override the millis provider to stabilize test

* remove unused statement and fix import

* Display boolean values in pipeline simulator (#99)

Convert field values into strings for displaying.

Fixes #54
(cherry picked from commit 18a31177aedb47a14fd6db7c7400e6e9d5a3c1c0)

* Simplify pipeline processor UI (#106)

Fixes #104 

* Use pipelines overview page as entry point

* Add connected streams to pipeline overview page

* Move pipeline connection lists to its own component

* Add API method to connect streams to a pipeline

Migrating pipeline connections to the pipeline page also means that
we need to create connections in the reverse order: several streams
connected to a pipeline. I added a new API method for it, as I think the
old method can still be useful for someone using the API
programmatically.

* Format PipelineConnectionsActions

* Set connections from pipeline details view

* Use nicer word break in timeline

* Cleanup after moving connections to pipeline view

* Add link to manage rules to rule details page

* Adapt pipeline simulator to recent changes

Now the stream selection needs to be done in the simulate page, as we
don't know before which stream to use.

* Implement arithmetic operators (#111)

Refs #16

* Create benchmark harness for file-based rules/pipelines (#107)

* initial wip version of a parameterized jmh runner for rules and pipelines

* implement in-memory rule service

to be able to use guice in the benchmarks and reuse the production bindings, we need in-memory variants of the services we have, in order to avoid benchmarking database access
the mongodb-backed services have been renamed and are bound by default

* [wip] adding in-memory rule service

use in-memory grok service from #2914
add some license headers
some in-memory services are still missing and the pom packaging needs work

* add in-memory pipeline and stream connections service

dummy stream service implementation because upstream will change this anyway in the near future
use in-memory module instead of anonymous module

* make in-memory services singletons so we can load data into them

actually read benchmarks from disk and populate the stores etc
see `match_all_rule` `benchmark.toml` for an example of how to set up a benchmark

* split pipeline plugin into parent, benchmarks and actual plugin

lots of refactoring needed because the benchmarks need to be built as a separate module due to jmh requirements
explicit mention of jmh annotation processing necessary because something overrides the setting, making classpath autodetection barf
this requires an updated manifest to work with graylog-project, will push separately

slight tweaks to the benchmark class

* support no-fork mode (-Dprofile) and running a single benchmark (-Dbenchmark.name) for debugging/profiling

add yourkit profiling commands as comments (useful to enable when digging into why a benchmark is slow) (manually include the yourkit -redist jar to classpath to use)
add another multi-stage benchmark as baseline for stage performance

* make build work again

another web-parent artifact: we need to disable the globally enabled frontend plugin

* add benchmark

* move misplaced files due to rebasing madness

* move eslint file

* remove old files left by improper rebase

* Use webplugin babel6 (#112)

* update to web-plugin and babel6, requires graylog2-server branch update-babel-packages-and-config

* "unskip" web plugin

the parent has to skip the execution of the frontend maven plugin, but the actual plugin needs to enable it

* Adjust to new parent structure

* bump to version 2.2.0-alpha.1

* bump parent version to 2.2.0-alpha.1

* add distributionManagement

* fix versions

* Correcting submodule versions.

* Making graylog dependency version dynamic.

* Binding version to project parent version.

* Revert "Binding version to project parent version."

This reverts commit a977672.

* Revert "Making graylog dependency version dynamic."

This reverts commit b80529d.

* Bumping graylog2-server version, adding dependency mgmt clause.

* [maven-release-plugin] prepare release 2.2.0-alpha.1

* [maven-release-plugin] prepare for next development iteration

* bump to snapshot server version

* update benchmark pom after alpha.1

* failing test for bug #2952

* Revert "failing test for bug #2952"

This reverts commit fcdf628.

* Plugin infrastructure changes

- Add `<groupId/>` to make versions plugin work correctly
- Enable maven deploy
- Add `<repositories/>` to make building on travis work
- Finish switch to `graylog-plugin.properties`
- Use dependency versions from parent properties if possible
- Use plugin versions from parent if possible

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-alpha.2

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-alpha.3

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Fix build by enabling the frontend-maven-plugin again

* create tarball for benchmark package, to avoid having to peel out benchmark files from the jar. also makes it easier to run new benchmarks

* don't try to use 512 gb..., turn off verbose flag for git maven plugin

* Do not use  when building with webpack

* cleanup pom, bump to auto value snapshot for memoized extension

* include git describe in benchmark artifact

* tweak benchmark invocation

* fix graylog.version property

* don't look up pipelines twice during interpreter run

* memoize Pipeline#hashCode

* reuse meters in pipelines/stages/rules

this prevents spending unnecessary time recreating the names for the metrics, which saves a considerable amount of time

* rewrite StageIterator to do less on-the-fly work

when the iterator is created, the entire set of stages is already known, so pre-compute the set instead of trying to be clever and doing on-the-fly computation to solve it "elegantly"

* write CSV result files

* replace TreeSet usage by ImmutableSortedSet, which is just a list underneath

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-alpha.4

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* lazily initialize rarely used EvaluationContext properties

we can save some allocations by deferring allocation of the maps and lists
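The lazy-initialization pattern referred to here can be sketched like this (class and field names are illustrative, not the actual EvaluationContext code):

```java
import java.util.HashMap;
import java.util.Map;

public class EvaluationContextSketch {
    // Allocated lazily: many rule evaluations never define a variable,
    // so the common case avoids the map allocation entirely.
    private Map<String, Object> variables;

    Map<String, Object> variables() {
        if (variables == null) {
            variables = new HashMap<>();
        }
        return variables;
    }

    boolean hasVariables() {
        return variables != null && !variables.isEmpty();
    }
}
```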

* bump auto-value to 1.4-rc1

* remove superfluous empty constructor

* Messagelist benchmark (#119)

Load test messages from csv files or generate 25000 static ones if the benchmark requests that, otherwise fall back on a single message.

* extract pipeline state updater from interpreter

by separating the state computation and updating from the interpreter we can control what state we want to use for interpretation
this allows a cleaner simulation without interfering with current processing

it also makes the code a little easier to follow by separating the two different concerns

* Split up process* methods for better visibility in profiler/code clarity (#121)

* don't doubly negate the value of the expression (#125)

fixes #124

* fix wording to be consistent: metrics -> throughput

* Use explicit default stream (#123)

* Display error message when system has no streams

Pipeline simulation currently requires a stream, but we didn't handle
the case where a system had no streams.

These changes will display a message indicating the situation, and link
to the streams page, where a new stream can be created.

* Use explicit default stream in REST and UI

Remove special handling of default stream in REST resources and web
interface.

* Use explicit default stream in processor

* Add method to delete pipeline connections

* Add migration for legacy default stream

Move stream connections to the legacy default stream to the new,
explicit default stream.

* Use explicit default stream in interpreter tests

As the default stream behaves like another stream, we need to explicitly
add the stream to messages routed to it.

* wip codegen

* first working version of rule code generation for when conditions

function invocation works (pipeline function authors API stayed the same)
arguments are pre-transformed (but constant expressions are not hoisted yet)

* create code for then method

implement Map and Array/List literals
assert that then method is properly executed

* only resolve functions once

* prioritize function call over var reference

* track the referenced expression in varref and fieldref for constant checking

* ignore varref arguments when precomputing constant args

during direct tree interpretation the variables are not yet defined, so we need to skip them
code generation doesn't care and won't ever take this code path

* hoist constant expressions/statements into rule constructor

this includes variable assignments and literal creation, as well as function arguments
no attempt is made to clean the variables yet, which may be necessary between runs
hoisted constant literals are not enforced to be constant yet, however the generated code also never touches them again, so it seems unnecessary to wrap them in Unmodifiable* objects

* use correct rule reference, otherwise we are simply measuring a false rule

* bump to use latest server version

* attach generated rule code and execute it if present

* cache stage iterator content for each different pipeline set

setting up the stage iterator is by far the most expensive part of interpreting overhead

* Revert "cache stage iterator content for each different pipeline set"

This reverts commit 49116699c94dde1eeba9e63419cd8543942df3ff.

* moved stage iterator config caching into interpreter state

configuration option to toggle caching (mostly for benchmarking)

* add arithmetics and indexed access to code generator

run all rule-based parser tests with and without code generation
refactor test functions to work around square/javapoet#526

* when resolving a rule, make an invokable copy with a fresh instance of the generated code if present

this avoids having to make the generated code threadsafe, which would severely impact runtime performance

* add configuration options to turn off stage caching and code generation

refactor classloader passing, so that each state reload reuses a single classloader

* remove large message file

* fix benchmark

* only show generated code on TRACE (#133)

* Turn off code generation if no compiler available (#134)

* detect whether the environment has a compiler available and complain if we don't

this typically happens in a JRE, which has the ToolProvider, but not the tools.jar.
we ask to run on a JDK instead

fail more gracefully on compilation errors

fixes #132

* only warn when code generation was requested so the user can silence the warning

* Date arithmetic (#136)

* initial date arithmetic support

#91

* code generation and interpreter date arithmetic

plus tests

* remove obsolete comment about operator selection

* prevent adding two dates at parse time

attempting to add (not subtract) two dates raises an InvalidOperation parse error because there aren't any sane semantics for doing so
while subtraction gives the duration between the dates, there's hardly any good reason to add them

* add individual function tests

add test for subtracting 10000 years

* fix benchmark due to signature change in core

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-beta.1

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Fix Javadoc errors about self-closing element <br/>

* fix NPEs when code generation is off and a function is unresolved (#141)

there were two possible NPEs during parse checking and when compiling rules with unresolved functions

* replace openhft compiler wrapper non-caching version (#137)

* replace openhft compiler wrapper with simpler version that does not cache

the caching compiler wrapper prevented class unloading, which we depend on
unscientific test that checks for class unloading is added

migrate all call sites

* Update ConfigurationStateUpdaterTest.java

* add to_date() function for easier interaction with $message.timestamp (#140)

this function makes the following statement possible:

    set_field("timestamp", to_date($message.timestamp) + hours(1));

* do not register cache gauges more than once (#139)

* do not register cache gauges more than once

invalid re-registering of cache gauges caused pipeline state updates to fail the second time
refactor the state updating to not use event bus, but instead let the interpreter pull the state directly from the instance it has
decorators and simulator push the state into their interpreter instance, making the interpreter immutable itself.

* Update ConfigurationStateUpdater.java

remove obsolete publishing of state on the event bus

users of the state now poll the state

* don't remove the initial state reload

* Implement split() function (#143)

Fixes #98

* Attach Javadoc and Sources artifacts and sign artifacts in "release" profile (#144)

Fixes #142

* Disable native code generation until compiler interface is fixed (#145)

this change disables the codegen tests and turns the config option into a no-op

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-beta.2

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Add page titles (#148)

Refs #2834

* Remove experimental flag from pages 🎉 (#149)

* Implement loadAllWithIndexSet method (#150)

Refs #3207

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-beta.3

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Do not use lambdas with gauge metrics (#152)

Instead use CacheStatsSet to expose Guava cache stats to metric registry

Fixes #146

* Add clone_message() function (#153)

* Add clone_message() function

Closes #138

* Add $message parameter to clone_message()

* Track total pipeline interpreter executionTime as a single metric (#155)

Fixes #3124

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-beta.4

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Allow duplicate stream titles in route_to_stream (#154)

Cache stream ids and titles to avoid heavy database traffic during function evaluation

Fixes #101
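The caching idea can be sketched with a plain `ConcurrentHashMap` (the plugin uses a Guava cache; the loader function here is a hypothetical stand-in for the database lookup):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class StreamTitleCache {
    private final Map<String, String> titlesById = new ConcurrentHashMap<>();

    // Hypothetical loader standing in for the MongoDB stream lookup.
    private final Function<String, String> loadTitleFromDb;

    StreamTitleCache(Function<String, String> loadTitleFromDb) {
        this.loadTitleFromDb = loadTitleFromDb;
    }

    String titleOf(String streamId) {
        // Each id hits the database at most once per cache lifetime,
        // instead of once per function evaluation.
        return titlesById.computeIfAbsent(streamId, loadTitleFromDb);
    }
}
```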

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-beta.5

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-beta.6

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* [graylog-plugin-pipeline-processor] prepare release 2.2.0-rc.1

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Shrinkwrap JS dependencies for Graylog 2.2.0 (#157)

More information in: https://docs.npmjs.com/cli/shrinkwrap

* [graylog-plugin-pipeline-processor] prepare release 2.2.0

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Remove npm shrinkwrap

* Bump version to 2.3.0-SNAPSHOT (#161)

* Update to react-bootstrap 0.30 (#164)

* Change onSelect handler signature

* Update gitignore

Exclude babel cache

* Replace deprecated react-bootstrap Inputs

Use the new API when only using Input as a wrapper, otherwise using the
adaptor Input component.

* Add id to Tab to improve accessibility

* Use uppercase timezone in TimezoneAwareFunction and fix default value (#169)

Fixes #168

* Lookup tables (#177)

* add generic lookup table support

* Use the new LookupResult return type in the lookup function (#176)

* [graylog-plugin-pipeline-processor] prepare release 2.3.0-alpha.1

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Adjust lookup function to single value change in server (#178)

Also add a lookup_value function.

* Adjust to upcoming changes in LookupResult (#179)

* [graylog-plugin-pipeline-processor] prepare release 2.3.0-alpha.2

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Spelling: safety (#181)

[ci skip]

* Add support for custom locale in parse_date() (#184)

Closes #183

* [graylog-plugin-pipeline-processor] prepare release 2.3.0-alpha.3

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Smaller UI and UX changes (#186)

* Unify navigation across all pages

The navigation that changed from page to page was immensely confusing. As already done for other parts of the web interface that follow a similar pattern, this change addresses that by always showing the same navigation items and highlighting the currently active one.

* Remove now duplicate button to simulator

This helps a little with the large amount of buttons on the page.

* Fix typo

* Warn when a pipeline isn't connected to streams

Show a prominent warning on the pipeline details page in case the pipeline is not connected to any streams.

* Improve "No stream connection" warning

Include a note that you do not have to connect a pipeline to a stream if you intend to use it for decorators only. Also make clear that pipelines that are connected to a stream will process incoming messages.

* New function: debug() (#188)

* New function: debug()

Added a new function that will log the string representation of any value that has been passed to it. I just found this immensely helpful when working with a lookup table result.

(cherry picked from commit fbe8cfb81e8c90f4b299c54cae8594642a57bec0)

* add missing license header

* Allow snake-case access to bean objects (#189)

If the given field name does not work, the code now tries to convert the
field name to a camel case value.

Fixes Graylog2/graylog-plugin-map-widget#47
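The fallback conversion can be sketched like this (an illustrative helper, not the actual implementation):

```java
public class FieldNames {
    // Converts a snake_case field name to the camelCase form a bean
    // getter would use, e.g. "geo_location" -> "geoLocation".
    static String snakeToCamel(String name) {
        StringBuilder sb = new StringBuilder(name.length());
        boolean upperNext = false;
        for (char ch : name.toCharArray()) {
            if (ch == '_') {
                upperNext = true;
            } else {
                sb.append(upperNext ? Character.toUpperCase(ch) : ch);
                upperNext = false;
            }
        }
        return sb.toString();
    }
}
```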

* Improve lookup functions (#191)

- Enable default parameter
- Add a description for function parameters
- Return null instead of an optional if the default is not set

* [graylog-plugin-pipeline-processor] prepare release 2.3.0-beta.1

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Prevent ClassCastException with invalid timestamp in clone_message() (#192)

* Prevent ClassCastException with invalid timestamp in clone_message()

If the "timestamp" field of a message is invalid, `clone_message()` will fail because
`Message#getTimestamp()` tries to cast this field to a `DateTime` object.

With the changes in this commit, the type of the "timestamp" field will be checked
before accessing it and if it is invalid (i. e. not a `DateTime` object), the current
date and time will be used and the original content will be stored in the "_original_timestamp"
field of the new message.

Fixes #3880

* Convert original (invalid) timestamp to string

* Log warning if message has invalid timestamp

* Rename field to "gl2_original_timestamp"
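The type check described above can be sketched as follows (illustrative only: the real code works on Graylog's `Message` class with Joda-Time's `DateTime`, while this sketch uses `java.time` and a plain map):

```java
import java.time.OffsetDateTime;
import java.util.Map;

public class TimestampGuard {
    // Returns a usable timestamp. If the stored value has the wrong type,
    // fall back to "now" and preserve the original value under another key
    // instead of throwing a ClassCastException.
    static OffsetDateTime timestampOf(Map<String, Object> fields) {
        Object raw = fields.get("timestamp");
        if (raw instanceof OffsetDateTime) {
            return (OffsetDateTime) raw;
        }
        fields.put("gl2_original_timestamp", String.valueOf(raw));
        OffsetDateTime now = OffsetDateTime.now();
        fields.put("timestamp", now);
        return now;
    }
}
```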

* Fix handling of "match all"/"match either" (#193)

The pipeline interpreter had a bug regarding the handling of the "match all" and "match either"
statements which caused pipelines containing stages with "match all" to continue processing even
if not all rules in the stage were executed.

Fixes #3924
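The intended semantics can be sketched as follows (names are illustrative; the real interpreter also tracks whether every rule actually executed, not just its condition result):

```java
import java.util.List;
import java.util.function.Predicate;

public class StageMatch {
    // With "match all", a stage only continues when every rule's condition
    // fired; with "match either", a single firing condition is enough.
    static <M> boolean stageMatched(List<Predicate<M>> ruleConditions, boolean matchAll, M message) {
        return matchAll
                ? ruleConditions.stream().allMatch(rule -> rule.test(message))
                : ruleConditions.stream().anyMatch(rule -> rule.test(message));
    }
}
```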

* Fix serialization/deserialization of StageSource (#195)

* Fix serialization/deserialization of StageSource

Some important Jackson annotations were missing from the StageSource class
which prevented proper deserialization via the Graylog REST API.

Fixes #194

* Add missing "stages" attribute to PipelineSource JsonCreator

* Use friendlier error message in case of invalid expressions (#196)

Closes #185

* [graylog-plugin-pipeline-processor] prepare release 2.3.0-rc.1

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Bump version to 2.4.0-SNAPSHOT

* Make "locale" parameter of "parse_date()" optional

The "locale" parameter of the "parse_date()" function was already treated as optional,
but wasn't marked as such in the function signature.

Refs #202
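
The optional-parameter behaviour can be illustrated with the JDK's date API (a simplified stand-in, not the plugin's implementation; names are illustrative):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

// Sketch: when the caller omits the locale (null here), fall back to a
// fixed default instead of failing.
public class ParseDateSketch {
    public static LocalDate parseDate(String value, String pattern, Locale locale) {
        Locale effective = locale != null ? locale : Locale.ENGLISH;
        return LocalDate.parse(value, DateTimeFormatter.ofPattern(pattern, effective));
    }
}
```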

* Add various Base encoding functions (#190)
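
For base64, such functions boil down to the JDK's `java.util.Base64` codec; the method names below are illustrative, not necessarily the plugin's:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of a base64_encode()/base64_decode() function pair built on the JDK codec.
public class BaseEncodingSketch {
    public static String base64Encode(String value) {
        return Base64.getEncoder().encodeToString(value.getBytes(StandardCharsets.UTF_8));
    }

    public static String base64Decode(String value) {
        return new String(Base64.getDecoder().decode(value), StandardCharsets.UTF_8);
    }
}
```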

* Sort alphabetically by title in MongoDbRuleService#loadAll() (#208)

The previous implementation returned rules in an opaque order. Users expect
a stable ordering, so we sort rules alphabetically by title.

* parse_json() returns MissingNode if input wasn't valid JSON (#210)

The `parse_json()` function is supposed to return a `JsonNode` but returned
`null` if the input wasn't valid JSON.

This change makes the function return `MissingNode` if the input wasn't
valid JSON and couldn't be parsed.

Closes #209

* Convert to use separate prop-types package

Closes #205

* [graylog-plugin-pipeline-processor] prepare release 2.4.0-alpha.1

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Fix plugin author formatting

* [graylog-plugin-pipeline-processor] prepare release 2.4.0-alpha.2

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Use Auto Value version from Graylog Plugin Parent

* [graylog-plugin-pipeline-processor] prepare release 2.4.0-alpha.3

* [graylog-plugin-pipeline-processor] prepare for next development iteration

* Bump version to 3.0.0-SNAPSHOT

* Prevent NPE in FunctionArgs#getConstantArgs() if Expression is null (#212)

Closes #211

* Set active state in the right prop (#214)

* Adapt to changes in Select (#213)

- Replace deprecated `onValueChange` prop
- Add `required` prop, now supported

* Specify type of 'value' (single or multi) for lookup functions (#217)

* Link to pipeline processors docs instead of Graylog website (#218)

Refs Graylog2/documentation#377

* Fix numeric conversions with to_double()/to_long() (#219)

* Fix numeric conversions with to_double()/to_long()

The functions for numeric conversions, `to_long()` and `to_double()`, didn't properly
support converting from strings or other numeric types.

Refs https://community.graylog.org/t/graylog-pipeline-problem/2810

* Add test cases for numeric corner cases (min/max/infinity)
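
The lenient conversion behaviour described above can be sketched like this (a simplified stand-in, not the plugin's actual code):

```java
// Sketch: accept any Number or a numeric String, and fall back to a
// caller-supplied default otherwise.
public class NumericConversions {
    public static long toLong(Object value, long defaultValue) {
        if (value instanceof Number) {
            return ((Number) value).longValue();
        }
        if (value instanceof String) {
            try {
                return Long.parseLong(((String) value).trim());
            } catch (NumberFormatException ignored) {
                // fall through to the default
            }
        }
        return defaultValue;
    }

    public static double toDouble(Object value, double defaultValue) {
        if (value instanceof Number) {
            return ((Number) value).doubleValue();
        }
        if (value instanceof String) {
            try {
                return Double.parseDouble(((String) value).trim());
            } catch (NumberFormatException ignored) {
                // fall through to the default
            }
        }
        return defaultValue;
    }
}
```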

* Sort function list on rules edit page alphabetically by name (#222)

This makes it much easier to find anything on that page.

* Register accidentally forgotten Base64Encode and Base64Decode functions (#223)

Refs #190

* Add functions to remove messages from streams (#220)

* Add `remove_from_default` boolean option to `route_to_stream()` function

This allows removing messages from the default stream, like the stream router already does

* Add `remove_from_stream()` function

This function removes the message from the given streams (by name or ID).
If the message would be taken off all streams, it is routed back to the default stream to avoid dropping the message accidentally
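
The fallback rule can be sketched as a plain set operation (illustrative names, not Graylog's API):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: removing a message from streams must never leave it on no stream
// at all, so an empty result routes it back to the default stream.
public class StreamRoutingSketch {
    public static final String DEFAULT_STREAM = "default";

    public static Set<String> removeFromStreams(Set<String> current, Set<String> toRemove) {
        Set<String> result = new HashSet<>(current);
        result.removeAll(toRemove);
        if (result.isEmpty()) {
            result.add(DEFAULT_STREAM); // avoid dropping the message accidentally
        }
        return result;
    }
}
```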

* Add missing Base64Decode and Base64Encode imports

* React router update (#225)

* Fix history prop deprecation

Use global history instead.

* Replace deprecated history pushState with push

* Add missing Input IDs (#226)

* Filter null values when iterating over `select_jsonpath` results (#233)

Fixes #232

* Source code editor (#234)

* Use SourceCodeEditor component

* Use annotations prop to set validations

* Add "parse_unix_milliseconds" function

The `parse_unix_milliseconds()` function enables users to parse a UNIX epoch timestamp
in milliseconds into a date object.
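
The conversion itself is a one-liner over the JDK, shown here with `java.time.Instant` as a stand-in for the plugin's Joda-based date type:

```java
import java.time.Instant;

// Sketch of a parse_unix_milliseconds()-style conversion.
public class UnixMillisSketch {
    public static Instant parseUnixMilliseconds(long millis) {
        return Instant.ofEpochMilli(millis);
    }
}
```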

* Remove Netty 3 dependency in CidrMatch (#239)

Replace `org.jboss.netty.handler.ipfilter.CIDR` with `org.graylog2.utilities.IpSubnet`
to get rid of the Netty 3 dependency in the `cidr_match()` function.

Refs #4226
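
The containment check that both the Netty 3 class and `IpSubnet` provide can be sketched dependency-free for IPv4 (a simplified illustration, not the `IpSubnet` implementation):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: IPv4 CIDR containment using only the JDK. Mask the subnet base and
// the candidate address with the prefix mask and compare.
public class CidrMatchSketch {
    public static boolean cidrMatch(String cidr, String address) {
        try {
            String[] parts = cidr.split("/");
            int prefix = Integer.parseInt(parts[1]);
            int subnet = toInt(InetAddress.getByName(parts[0]).getAddress());
            int ip = toInt(InetAddress.getByName(address).getAddress());
            int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
            return (subnet & mask) == (ip & mask);
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException(e);
        }
    }

    // Packs a 4-byte IPv4 address into an int.
    private static int toInt(byte[] octets) {
        int value = 0;
        for (byte b : octets) {
            value = (value << 8) | (b & 0xff);
        }
        return value;
    }
}
```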

* Handle PipelineInterpreter.State event to avoid warning in the log (#241)

Fixes #236

* Stabilize date/time based tests

The message timestamp used to be in the system default locale instead of UTC.

Additionally, this commit adds tests/examples for accessing individual components of the message timestamp in a pipeline rule.

* Move `parse_unix_milliseconds()` tests into separate test case

* Add `starts_with` and `ends_with` string functions (#227)

* Add comparison functions for all supported types (#237)

* Add comparison functions for all supported types

* Ensure actions were triggered in FunctionsSnippetsTest#comparisons()

* Add support for JsonNode to `set_fields()` (#228)

* Add support for JsonNode to `set_fields()`

Sometimes users might want to parse and merge the JSON payload of a message
with the Graylog message without knowing the complete structure of the payload
or without having a fixed structure which could be selectively merged by using
the `json_path()` method.

This commit essentially adds the possibility to create a pipeline rule emulating
the existing JSON extractor:

    rule "json"
    when
      // some condition
    then
      let json = parse_json(to_string($message.some_field));
      set_fields(json);
    end

* Introduce `to_map` function to convert JsonNode to Map

* Fix description of `to_map()` function

* Merge Pipeline Processor plugin into Graylog core

Closes Graylog2/graylog-plugin-pipeline-processor#216

* Migrate URL class to HttpUrl for more sanity

...and fewer Forbidden APIs warnings

* Fix or suppress Error Prone and Forbidden APIs compile errors

* Add PipelineProcessorAuditEventTypes to AuditCoverageTest

* Move loading PipelineConfig into Server command

Otherwise the "cached_stageiterators" and "generate_native_code" settings cannot be found.

While this *should* work with the `PluginModule` interface and the `PluginModule#getConfigBeans()`
method, fixing this would require some refactorings which I'd like to defer to a later point in time

* Fix React routes

* Remove benchmarks and unused files