Hotfix: issue 150 #151

Merged: 8 commits merged into develop from hotfix/issue_150 on Feb 27, 2019

Conversation

zouzias (Owner) commented Feb 25, 2019

Add unit test that replicates #150

tuleism and others added 5 commits February 25, 2019 14:23
* Revert "Setting version to 0.3.4-SNAPSHOT"

This reverts commit 2f1d7be.

* README: update to 0.3.3

* README: fix javadoc badge

* remove unused param
codecov bot commented Feb 25, 2019

Codecov Report

Merging #151 into develop will increase coverage by 0.02%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           develop    #151      +/-   ##
==========================================
+ Coverage    77.28%   77.3%   +0.02%     
==========================================
  Files           33      33              
  Lines          995     996       +1     
  Branches        88      88              
==========================================
+ Hits           769     770       +1     
  Misses         226     226
Impacted Files Coverage Δ
.../scala/org/zouzias/spark/lucenerdd/LuceneRDD.scala 87.61% <100%> (+0.11%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cd03a31...6c2989a.
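
As a quick, illustrative check of the arithmetic behind the table above (this is not part of the project or of Codecov's tooling, just the hits-over-lines ratio computed from the reported numbers):

```scala
// Illustrative sanity check of the Codecov figures above; not project code.
object CoverageCheck extends App {
  val before = 769.0 / 995.0 * 100.0 // coverage on develop, roughly 77.29%
  val after  = 770.0 / 996.0 * 100.0 // coverage with #151 merged, roughly 77.31%
  // The single line added by this PR is executed by the new test, hence 100% diff coverage.
  println(f"develop: $before%.2f%%, with #151: $after%.2f%%, delta: ${after - before}%+.2f pp")
}
```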

zouzias merged commit 1e03459 into develop on Feb 27, 2019
zouzias deleted the hotfix/issue_150 branch on February 27, 2019 23:12
zouzias added a commit that referenced this pull request Mar 11, 2019
* Remove unused code (#141)

* Revert "Setting version to 0.3.4-SNAPSHOT"

This reverts commit 2f1d7be.

* README: update to 0.3.3

* README: fix javadoc badge

* remove unused param

* [sbt] version updates

* [conf] allow not_analyzed string fields (#145)

* [not-analyzed-fields] do not analyzed fields ending with _notanalyzed

* [hotfix] fixes issue 150

* [tests] issue 150

* fix typo

* [blockEntityLinkage] drop queryPartColumns
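
The not_analyzed commits above (#145) describe a naming convention: string fields whose names end in `_notanalyzed` are indexed without analysis. A minimal sketch of that idea using plain Lucene field types; `toLuceneField` is a hypothetical helper for illustration, not the project's actual implementation, and it assumes lucene-core on the classpath:

```scala
import org.apache.lucene.document.{Field, StringField, TextField}

// Hypothetical helper illustrating the suffix convention: fields ending in
// "_notanalyzed" are indexed verbatim (StringField), all others are analyzed (TextField).
def toLuceneField(name: String, value: String): Field =
  if (name.endsWith("_notanalyzed"))
    new StringField(name, value, Field.Store.YES) // kept as a single, unanalyzed token
  else
    new TextField(name, value, Field.Store.YES)   // tokenized by the configured analyzer
```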
zouzias added a commit that referenced this pull request Mar 11, 2019
* [sbt] version updates

* [sbt] disable build for scala 2.12

* [conf] allow not_analyzed string fields (#145)

* [not-analyzed-fields] do not analyzed fields ending with _notanalyzed

* Revert "Revert "Setting version to 0.3.5-SNAPSHOT""

This reverts commit a6da0af.

* [build] update Lucene to 7.7.0

* Hotfix: issue 150 (#151)

* [sbt] version updates

* [scripts] fix shell

* Block linkage: allow a block linker with Row to Query (#154)

* [linkage] block linker with => Query

* [linkage] block linker is Row => Query

* remove Query analyzer on methods

* [sbt] set version to 0.3.6-SNAPSHOT
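
The #154 bullets above change the block linker into a `Row => Query` function. A hedged sketch of what such a linker can look like, using standard Spark and Lucene types; the `name` column and the choice of a TermQuery are illustrative assumptions, not the project's API surface:

```scala
import org.apache.lucene.index.Term
import org.apache.lucene.search.{Query, TermQuery}
import org.apache.spark.sql.Row

// A block linker in the Row => Query shape described by #154: for each input row,
// build the Lucene query used to retrieve its candidate matches from the index.
val linker: Row => Query = { row =>
  val name = row.getString(row.fieldIndex("name")) // "name" is an illustrative column
  new TermQuery(new Term("name", name))            // exact-term lookup on the indexed field
}
```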
zouzias added a commit that referenced this pull request Apr 26, 2019
* Hotfix: issue 150 (#151)

* Feature: allow custom analyzers during compile time (#160)

* [analyzers] custom analyzer

* test return null

* [travis] travis_wait 1 min

* Revert "[travis] travis_wait 1 min"

This reverts commit c79456e.

* use lucene examples

* custom analyzer return null

* fix java reflection

* add docs

* Update to Lucene 8 (#161)

* [lucene] upgrade to version 8.0.0

* [lucene] remove ngram analyzer

* delete ngram analyzer

* minor fix

* add scaladoc

* LuceneRDDResponseSpec.collect() should work when no results are found - Issue #166 (#168)

* [sbt] update scalatest 3.0.7

* [sbt] update spark 2.4.1

* [build.sbt] add credentials file

* [plugins] update versions

* [sbt] update to 0.13.18

* Allow Lucene Analyzers per field (#164)

* [issue_163] per field analysis

* [sbt] update scalatest to 3.0.7

* [issue_163] fix docs; order of arguments

* fixes on ShapeLuceneRDD

* [issue_163] fix test

* issue_163: minor fix

* introduce LuceneRDDParams case class

* fix apply in LuceneRDDParams

* [issue_163] remove duplicate apply defn

* add extra LuceneRDD.apply

* [issue_165] throw runtime exception; use traversable trait (#170)

[issue_165] throw runtime exception; handle multi-valued fields in DataFrames

* [config] refactor; add environment variables in config (#173)

* [refactor] configuration loading

* [travis] code hygiene

* Make LuceneRDDResponse to extend RDD[Row] (#175)

* WIP

* fix tests

* remove SparkDoc class

* make test compile

* use GenericRowWithSchema

* tests: getDouble score

* score is a float

* fix casting issue with Seq[String]

* tests: LuceneDocToSparkRowpec

* tests: LuceneDocToSparkRowpec

* more tests

* LuceneDocToSparkRowpec: more tests

* LuceneDocToSparkRowpec: fix tests

* LuceneDocToSparkRow: fix Number type inference

* LuceneDocToSparkRowpec: fix tests

* implicits: remove StoredField for Numeric types

* implicits: revert remove StoredField for Numeric types

* fix more tests

* fix more tests

* [tests] fix LuceneRDDResponse .toDF()

* fix multivalued fields

* fix score type issue

* minor

* stored fields for numerics

* hotfix: TextField must be stored using StoredField

* hotfix: stringToDocument implicit

* link issue 179

* fix tests

* remove _.toRow() calls

* fix compile issue

* [sbt] update to spark 2.4.2

* [travis] use spark 2.4.2
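
The per-field analyzer work above (#164, "Allow Lucene Analyzers per field") maps naturally onto Lucene's stock per-field wrapper. A small sketch of that API; the field names and analyzer choices here are illustrative and not necessarily how the project wires its configuration:

```scala
import scala.collection.JavaConverters._
import org.apache.lucene.analysis.Analyzer
import org.apache.lucene.analysis.core.WhitespaceAnalyzer
import org.apache.lucene.analysis.en.EnglishAnalyzer
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper
import org.apache.lucene.analysis.standard.StandardAnalyzer

// Per-field analysis: "description" gets English stemming, "tags" is split on
// whitespace only, and every other field falls back to the StandardAnalyzer.
val perField: Map[String, Analyzer] = Map(
  "description" -> new EnglishAnalyzer(),
  "tags"        -> new WhitespaceAnalyzer()
)
val analyzer: Analyzer = new PerFieldAnalyzerWrapper(new StandardAnalyzer(), perField.asJava)
```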
zouzias added a commit that referenced this pull request Dec 30, 2019
* Hotfix: issue 150 (#151)

* [build] minor updates

* Remove sbt-spark-package plugin (#181)

* [sbt] remove sbt-spark-package

* WIP

* [sbt] add spark-mllib

* [sbt] make spark provided

* update to sbt to 1.X.X (#182)

* [wip] update to sbt 1.X.X

* [travis] fix script

* [sbt] update to 1.2.8

* [sbt] update all plugins

* [sbt] spark update v2.4.3 (#183)

* [sbt] spark update v2.4.3

* minor update joda-time

* [sbt] update spark-testing

* [sbt] lucene 8.1.0 update (#184)

* [sbt] lucene update 8.1.1 (#185)

* [scalatest] update to 3.0.8

* [sbt] joda-time patch update

* [release-info] add sonatype credentials

* [sbt] lucene 8.2.0 update (#187)

* [sbt] update plugins

* [sbt] update spark 2.4.4 (#188)

* [sbt] update joda to 2.10.4

* [sbt] update joda / typesafe config (#189)

* [sbt] update Lucene 8.3.0 (#191)

* [sbt] version updates (#194)

* Update Lucene to version 8.3.1
* Update Twitter algebird to version 0.13.6
* Update scalatest/scalactic to version 3.1.0

* [github-actions] add scala.yml (#193)

* [github-actions] add scala.yml

* [sbt] update to version 1.3.3 (#195)

* [plugins] update sbt plugins (#196)

* [lucene] update version 8.4.0 (#197)

* fix version to SNAPSHOT

* code hygiene
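
The "[sbt] make spark provided" and "[sbt] add spark-mllib" bullets above refer to the common pattern of marking Spark as a provided dependency so it is supplied by the cluster at runtime rather than bundled into the assembled jar. A minimal build.sbt sketch of that pattern; the version number is illustrative, not a copy of the project's build:

```scala
// build.sbt sketch: Spark comes from the cluster at runtime, so mark it Provided
// to keep it out of the assembly. The version below is illustrative.
val sparkVersion = "2.4.3"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % sparkVersion % Provided,
  "org.apache.spark" %% "spark-sql"   % sparkVersion % Provided,
  "org.apache.spark" %% "spark-mllib" % sparkVersion % Provided
)
```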
zouzias added a commit that referenced this pull request Nov 29, 2020
* Hotfix: issue 150 (#151)

* [sbt] update sbt-sonatype to 3.8.1

* [sbt] update sbt-scoverage to 1.6.1

* Patch updates (#222)

* patch updates

* [sbt] sbt-release update to 1.0.13

* Update spark version to 2.4.5 (#226)

* [sbt] update spark to 2.4.5

* [sbt] sbt update to 1.3.8

* [sbt] scalatest 3.1.1 (#231)

* Update spark-testing-base to 2.4.3_0.14.0 (#235)

* Revert "Setting version to 0.3.9-SNAPSHOT"

This reverts commit 82ed3b6.

* Update spark-testing-base to 2.4.3_0.14.0

Co-authored-by: Anastasios Zouzias <anastasios@sqooba.io>

* Update sbt to 1.3.9 (#236)

* Revert "Setting version to 0.3.9-SNAPSHOT"

This reverts commit 82ed3b6.

* Update sbt to 1.3.9

* Update version.sbt

Co-authored-by: Anastasios Zouzias <anastasios@sqooba.io>
Co-authored-by: Anastasios Zouzias <zouzias@users.noreply.github.com>

* Update to Spark 2.4.6 (#246)

* [sbt] update to spark 2.4.6

* fix version

* [sbt] update deps (#247)

* update deps (#248)

* [maintainance] minor updates (#252)

Update:
* jts-core to 1.17.0
* sbt to 1.3.13
* sbt-assembly to 1.15.0

* [sbt] update scalatest 3.x; minor updates

* [sbt] minor updates

* update sbt version

* update tests

* update sbt plugins

* update spark

* [sbt] minor updates (#268)

Changelog:
    Fixes #266
    Fixes #259

* update README

Co-authored-by: Linh Nguyen <tuleism@gmail.com>
Co-authored-by: Anastasios Zouzias <anastasios@sqooba.io>
Co-authored-by: Yeikel <yeikel@users.noreply.github.com>
Co-authored-by: Scala Steward <43047562+scala-steward@users.noreply.github.com>