Fixed NullPointerException while creating temporary function #1384
base: master
Commits on Oct 30, 2018
Snap 2503 syncing store (TIBCOSoftware#1192)
- adding command-line arg to kill on OOM
- checking if system is 64-bit to apply the right libgemfirexd<32/64>.so
- adding agent in case of mac/linux
- enable copy in case of mac also
- implementing review suggestions
- checking if agent load would be successful, else continue without agent
- implementing review suggestions
- undoing wrong changes to gradle.properties
- syncing store
Commit 5575ceb
[SNAP-2659] reset pool at the end of execution (TIBCOSoftware#1191)
Reset the pool at the end of collect to avoid spillover of lowlatency pool setting to later operations that may not use the CachedDataFrame execution paths.
Sumedh Wale authored Oct 30, 2018 (commit 882277f)
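The reset described in the SNAP-2659 commit above is a save-and-restore pattern: the pool setting applied for one query must be put back once collect finishes, even on failure. A minimal sketch, with a hypothetical `Session` object standing in for the real execution context (not SnappyData's actual classes):

```python
# Hypothetical session holding a scheduler-pool setting (assumption: the
# real context is Spark's scheduler pool on the session's SparkContext).
class Session:
    def __init__(self):
        self.scheduler_pool = "default"

def collect_with_pool(session, pool, run):
    saved = session.scheduler_pool
    session.scheduler_pool = pool       # e.g. "lowlatency" for this query
    try:
        return run()
    finally:
        session.scheduler_pool = saved  # reset pool at end of execution

s = Session()
result = collect_with_pool(s, "lowlatency", lambda: [1, 2, 3])
```

The `finally` is the point of the fix: without it, a failed collect would leave the low-latency pool set for later operations that never go through this path.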
Sumedh Wale committed Oct 30, 2018 (commit 9a9b23a)
Changed scripts to work with old bash 3.x versions
For cases like macOS, which ships with bash 3.x by default.
Sumedh Wale committed Oct 30, 2018 (commit 81f3cbf)
Commits on Oct 31, 2018
Modified the directory creation method to exclude the PID from the directory structure, to avoid confusion between the PID of the hydra JVM and the PID of the snappy node
- Corrected table names in the configuration file required for dmlOps tests
Testing done: verified the changes by running a few tests from HA and non-HA bts
Commit ac5e5a9
Commits on Nov 1, 2018
- avoid unnecessary re-evaluation of cluster properties target
- redirect error output of spark-shell in testSparkShell to the output string for checks too
- update store and spark links
- remove distZip from default assemble target
Sumedh Wale committed Nov 1, 2018 (commit 974d6f8)
Commits on Nov 2, 2018
Snap 2698 ignoring failing tests in compatibilityTests (TIBCOSoftware#1194)
- ignoring failing tests
- removing wrongly created test suite
- formatting changes
Commit 08ecfb1
Moved the registration of function 'RefreshMetadata' before start of the server. With the previous order there was a chance that this function would come in for execution on this node even before it was registered.
Neeraj Kumar committed Nov 2, 2018 (commit 58bc4b9)
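The ordering race fixed above is generic: if a server starts accepting requests before a handler is registered, a request can arrive for an unknown function. A hedged sketch with invented names (a queue-driven server standing in for the real distributed function service):

```python
import threading
import queue

handlers = {}

def register(name, fn):
    handlers[name] = fn

def serve(requests, results):
    # Processes requests until a None sentinel arrives; a request for an
    # unregistered name would raise KeyError, which is the race being fixed.
    while True:
        name = requests.get()
        if name is None:
            break
        results.put(handlers[name]())

requests, results = queue.Queue(), queue.Queue()
register("RefreshMetadata", lambda: "refreshed")    # register first ...
server = threading.Thread(target=serve, args=(requests, results))
server.start()                                      # ... then start serving
requests.put("RefreshMetadata")
```

Registering before `server.start()` makes the KeyError window impossible by construction, which is exactly the reordering the commit describes.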
Resolved relative path issue in the addDependentJarsToProp function. There was an issue with running a snappy job with the --packages option outside the snappydata build directory.
Commit e8e4531
[SNAP-2646] make a copy for non-primitive aggregations (TIBCOSoftware#1195)
Key-based aggregations (GROUP BY) already handle copying of the incoming value, but this was missing in non-key flat aggregations.
Changes proposed in this pull request:
- refactored value-copy code in ObjectHashMapAccessor and string clone code (only if required)
- use the above for non-key aggregations too
- renamed io.snappydata.implicits to io.snappydata.sql.implicits
- handle null values during clone/copy of non-primitive aggregate results
Sumedh Wale authored Nov 2, 2018 (commit 481c206)
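Why the copy in SNAP-2646 matters: column decoders typically reuse one mutable buffer per row, so an aggregator that keeps a reference instead of a copy ends up aggregating whatever the buffer last held. A toy illustration (invented example, not the generated code):

```python
def aggregate_max(source, copy):
    # Non-key flat aggregation: track the max value seen across all rows.
    best = None
    for value in source():
        if best is None or value > best:
            best = bytes(value) if copy else value  # copy before keeping
    return bytes(best) if best is not None else None

def reused_buffer_rows():
    buf = bytearray(2)  # one buffer reused for every row, as a decoder might
    for v in (b"bb", b"dd", b"aa"):
        buf[:] = v
        yield buf

wrong = aggregate_max(reused_buffer_rows, copy=False)  # b"aa": stale buffer
right = aggregate_max(reused_buffer_rows, copy=True)   # b"dd": true maximum
```

Without the copy, `best` is just an alias of the shared buffer, so later rows silently overwrite the "max" that was supposedly saved; primitives dodge this because they are copied by value.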
Amogh Shetkar committed Nov 2, 2018 (commit 5c7962c)
suranjan kumar committed Nov 2, 2018 (commit bf67dcf)
correct snappy-jdbc module shadowJar name and add to archives
Sumedh Wale committed Nov 2, 2018 (commit c072147)
Publish shadowJar for snappy-jdbc
Sumedh Wale committed Nov 2, 2018 (commit b5bbea3)
Commits on Nov 3, 2018
Fixed failures in compatibilityTests
Also corrected the copyright headers to be SnappyData ones.
Sumedh Wale committed Nov 3, 2018 (commit 3158df6)
Commits on Nov 4, 2018
Add SnappyData copyright header to log4j.properties files
Sumedh Wale committed Nov 4, 2018 (commit 1a1e74f)
Use session property for snappydata.connection in JDBC extensions
Sumedh Wale committed Nov 4, 2018 (commit bae3a67)
Slight modification in the replace script.
Changed the year from 2017 to 2018 in license headers.
Neeraj Kumar committed Nov 4, 2018 (commit 702e087)
Clear SnappyHiveCatalog cache for the calling paths from SnappyExternalCatalog
- for paths from SnappyExternalCatalog (e.g. "CREATE VIEW"), the catalog cache needs to be cleared too; added a unit test for this
- fixed some occasional failures due to test issues
- renamed SnappyTableStatsProviderService.suspendCacheInvalidation to TEST_SUSPEND_CACHE_INVALIDATION to indicate clearly that it is meant only for tests
Sumedh Wale committed Nov 4, 2018 (commit 3c1469e)
Fixed hive meta-store client configuration in SnappyHiveThriftServer2
When using SnappySession (the default), the temporary hive configuration passed in is not only used by HiveServer2 but also overrides the internal hive configuration used by SnappyStoreHiveCatalog, causing problems. Now using the SnappySession hive configuration after adding the "hive.server2" configuration read from the temporary "executionHive" client (which in turn sets it up using hive-site.xml etc. that are ignored by SnappySession's hive meta-store client). Also reduced logging during the temporary hive client initialization.
Sumedh Wale committed Nov 4, 2018 (commit d6c7edd)
Commits on Nov 5, 2018
Added "snappydata.sql.hiveCompatible" property
- new "snappydata.sql.hiveCompatible" property to make some SQL output more hive-compatible; currently this covers the "show tables ..." variants, which then have only one "name" column in the output
- added unit tests for the above property and the "show tables ..." variants
Sumedh Wale committed Nov 5, 2018 (commit e756f24)
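A toy sketch of what the hive-compatible flag above does to "show tables ..." output (invented row shapes and helper, not the actual catalog code): collapse the multi-column listing down to hive's single "name" column.

```python
def show_tables(rows, hive_compatible=False):
    # rows are assumed (schemaName, tableName, isTemporary) triples
    if hive_compatible:
        return [(name,) for _, name, _ in rows]  # hive-style single column
    return rows

rows = [("app", "orders", False), ("app", "items", False)]
```

Keeping this behind a session property lets existing consumers of the wider output keep working while hive-oriented tools get the shape they expect.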
Minor changes to the collect-debug-artifacts script to pull .jmap and .jvmkill*.log files. .hprof files can be pulled by passing the option '-m' or '--hprofdump' to the script.
Neeraj Kumar committed Nov 5, 2018 (commit c0a0363)
Commit 1eff5d8
Commits on Nov 6, 2018
Docv102delta (TIBCOSoftware#1197)
Hot fix changes; updates to backlog doc items; new Spark Extension API Guide.
Commit 9a6c84b
Update the artifact name and version of the snappydata jdbc dependency.
Amogh Shetkar committed Nov 6, 2018 (commit a472048)
Include release notes for 1.0.2.1
Lizy committed Nov 6, 2018 (commit 7b3989f)
Lizy committed Nov 6, 2018 (commit 3a4f481)
Changed "Archived" to "Doc Archives" in the documentation index file.
Lizy committed Nov 6, 2018 (commit 357ce93)
Update snappydata-jdbc pom to exclude all dependencies
The jar published for snappydata-jdbc is the shadow one that already includes all dependencies.
Amogh Shetkar committed Nov 6, 2018 (commit cda1d7d)
Update the Zeppelin interpreter version.
Amogh Shetkar committed Nov 6, 2018 (commit 627786e)
Remove snappydata-jdbc from product
Sumedh Wale committed Nov 6, 2018 (commit 9c58e0f)
Updated the Zeppelin interpreter version to be used.
Amogh Shetkar committed Nov 6, 2018 (commit a1f1a1b)
Commits on Nov 9, 2018
SNAP-2602: On the Snappy UI, add a column named "Overflown Size"/"Disk Size" in Tables (TIBCOSoftware#1185)
Changes for SNAP-2602:
- HTML and CSS changes for displaying each table's size overflown to disk as "Spill-To-Disk Size" on the UI
- TableSummary updated to hold each table's spillover size to disk
- SnappyTableStatsProviderDUnitTest updated accordingly, plus relevant code changes at other places
- named the column "Spill-To-Disk Size" for tables' spillover size to disk
- renamed the "Memory Size" column to "In-Memory Size"
Commit 31b058b
Commit 485f2eb
Commits on Nov 13, 2018
Clean up and optimization in cdc tests (TIBCOSoftware#1199)
- Cleanup and formatting changes in CDC tests.
Supriya Pillai authored Nov 13, 2018 (commit 4f2924c)
Commits on Nov 14, 2018
[SNAP-2462] enable common-subexpression elimination for ParamLiterals (TIBCOSoftware#1198)
ParamLiterals are not constants, so the common-subexpression elimination phases of the optimizer may fail even when the values inside them are actually equal.
Changes proposed in this pull request:
- adds a new RefParamLiteral that points to a ParamLiteral and is used when a constant value is found that is the same as one found before
- its hashCode mirrors that of the original ParamLiteral, while equals is reference equality with the original; this enables a ParamLiteral to be equal to a RefParamLiteral in a different position
- when comparing two plans for equality, the RefParamLiterals have to be in exactly the same positions to evaluate as equal, since RefParamLiterals are never equal to other ParamLiterals; two RefParamLiterals are equal iff their original ParamLiterals are
- this allows expressions like "a = 4 and b = 4" and "a = 6 and b = 6" to be equal after tokenization in plan caching, but not "a = 4 and b = 4" and "a = 6 and b = 5", which get different plans
Also added a test for the above, including a check for filter push-down.
Sumedh Wale authored Nov 14, 2018 (commit ca25223)
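The equality scheme described above can be modeled roughly in plain Python (invented structure, not the actual Spark expression classes): a RefParamLiteral mirrors the hash of the ParamLiteral it points to and equals only that original, or another RefParamLiteral whose original is equal.

```python
class ParamLiteral:
    def __init__(self, pos):
        self.pos = pos  # position of the tokenized constant in the query
    def __hash__(self):
        return hash(self.pos)
    def __eq__(self, other):
        if isinstance(other, RefParamLiteral):
            return other.original is self
        return isinstance(other, ParamLiteral) and self.pos == other.pos

class RefParamLiteral(ParamLiteral):
    def __init__(self, original):
        super().__init__(original.pos)  # hashCode mirrors the original
        self.original = original
    def __eq__(self, other):
        if isinstance(other, RefParamLiteral):
            return self.original == other.original  # equal iff originals are
        return other is self.original
    __hash__ = ParamLiteral.__hash__

# "a = 4 and b = 4" tokenizes to a literal plus a back-reference to it;
# "a = 6 and b = 5" has two independent literals, so the plans differ.
p = ParamLiteral(0)
plan_4_4 = [p, RefParamLiteral(p)]
q = ParamLiteral(0)
plan_6_6 = [q, RefParamLiteral(q)]
plan_6_5 = [ParamLiteral(0), ParamLiteral(1)]
```

Comparing the literal lists element-wise, `plan_4_4` matches `plan_6_6` (repeated constant in the same positions) but not `plan_6_5`, which is the plan-cache behavior the commit describes.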
[SNAP-2389] NPE during lead failure/restart
Sumedh Wale committed Nov 14, 2018 (commit 7b7871e)
Changes for SNAP-2612 (TIBCOSoftware#1174)
- Updated a variable in ExternalTableSummary to hold the table's fully qualified name.
- Set table schema and fully qualified names while creating SnappyExternalTableStats objects.
Commit 85e2f40
Commit 37aad74
Commits on Nov 16, 2018
[SNAP-2634] providing SnappyData-specific strategies in IncrementalExecution (TIBCOSoftware#1201)
As part of these changes we removed Snappy's DefaultPlanner and instead pass SnappyData-specific strategies via `experimentalMethods.extraStrategies` of `SnappySessionState`. Also, SnappyAggregationStrategy is disabled for stream queries as it clashes with the StatefulAggregationStrategy added by Spark as part of `IncrementalExecution`. This is achieved by setting snappydata.sql.hashAggregateSize to -1 in the SnappySession's configuration, which means the SnappyData aggregation optimization is turned off for any usage of that particular SnappySession, even for non-streaming queries.
Commit 43065d5
Commits on Nov 17, 2018
Fix an occasional failure in smoke test due to an ignorable exception from the hive meta-store client
Sumedh Wale committed Nov 17, 2018 (commit 8d1ca85)
Fixing some meta-data query inconsistencies
- add support for SHOW VIEWS
- use "schemaName" for the column instead of Spark's "database" in SHOW TABLES
- show CHAR/VARCHAR types instead of STRING for those column types in meta-data queries
Sumedh Wale committed Nov 17, 2018 (commit 5601184)
Commits on Nov 19, 2018
Disabled plan caching for LocalTableScanExec as well (TIBCOSoftware#1202)
- Disabled plan caching for LocalTableScanExec and RDDScanExec. Fixes SNAP-2712.
kneeraj authored Nov 19, 2018 (commit 3cfe7c3)
Minor test change. Missed in last merge.
Neeraj Kumar committed Nov 19, 2018 (commit db3c3d0)
Commits on Nov 20, 2018
ClusterRecovery test cases (TIBCOSoftware#1049)
Added the following test cases to test cluster recovery:
1) When a server/lead is started first in the cluster as a new node.
2) When a new node is added and rebalance is called in parallel with an abrupt server kill.
3) When any of the nodes in the cluster crashes or gets an OOM/low-memory exception, along with the 2nd test case.
4) When the cluster is restarted with minimal memory.
5) When all locators are down and a new node tries to join the cluster along with rebalance.
6) When collocated tables are dropped in the wrong/right order and servers are killed abruptly.
Supriya Pillai authored Nov 20, 2018 (commit 094dffc)
Commits on Nov 21, 2018
Commit af77ce5
Commits on Nov 26, 2018
Additional tests for streaming_sink feature and test changes to have derby for validation (TIBCOSoftware#1178)
- Increased wait time after lead HA.
- Test changes to send tableName as an argument to the snappy job.
- Added support for performing the same operation as in streaming in derby, in parallel, for data validation.
- Changed modifiers for derby methods; rearranged the CLOSETASK to get the proper execution sequence.
- Added a method in the kafka producer to call it from hydra.
- Removed eventType from the row as it was getting added to derby as a column.
- Modified the streaming_sink test to use derby validation.
- Added a test for conflation: a small set of keys with repeated operations on the same key.
- Added a test for generic column names.
- Made the message producer run multi-threaded.
- Changed the number of events to send from the kafka producer.
- Renamed the replicated row table test bt.
Commit 2b983ab
SNAP-2723: streaming compatibility tests (TIBCOSoftware#1206)
- Adding some more compatibility tests for structured streaming
- Enabling a few passing tests which were ignored earlier
Commit 70110af
Commits on Nov 27, 2018
[SNAP-2381] global lock to serialize concurrent puts (TIBCOSoftware#1056)
- global lock in putInto execution to serialize concurrent puts
- added the lock to the session context, cleared in the putInto plan's executeCollect
- avoid the region lock in smart connector mode
Sumedh Wale authored and suranjan kumar committed Nov 27, 2018 (commit aa2c337)
Commits on Nov 28, 2018
Docv1.0.2.1 delta (TIBCOSoftware#1207)
- Added links in the release notes corresponding to the items that have supporting documentation.
- Incorporated changes to the Spark Extension API Guide.
- Added a how-to topic for connecting Tableau to SnappyData.
- Edited the "Setting up ODBC Driver and Tableau" chapter and moved the Tableau content to the how-to topic.
- Incorporated review comments.
- Added a description for server-auth-provider.
- Added a section on handling OOM errors with the jvmkill agent in the best practices section.
- Added security-based syntax and examples for the command-line utilities.
- Added syntax and corresponding examples for PUT INTO for column tables and row tables; added a warning in column tables about appropriate usage of the syntax.
Commit 7af9afe
Commits on Nov 29, 2018
SNAP-2661: Provide the Snappy UI user control over auto-update (TIBCOSoftware#1193)
- HTML code changes for an Auto Update ON/OFF switch on the Snappy UI (Dashboard and Member Details pages).
Commit 07e7f2a
Commit b27d296
Commits on Nov 30, 2018
Remove framework defaults for eviction and critical heap percentage (TIBCOSoftware#1203)
- Removed framework defaults for eviction and critical heap percentage; tests now use product defaults when none are provided.
- Fixed an issue with the installJar bt run; the bt had started failing after recent refactoring changes.
- Added hydra test coverage to simulate a memory leak issue reported by a user with concurrent put-into and select operations.
Commit 51d560b
Commit fe194f7
Commits on Dec 5, 2018
Automated hydra test to calculate load time for tables in different ways (TIBCOSoftware#1210)
- Automated a hydra test to calculate load time for tables in different ways (csv/parquet, dataFrame/externalTable).
Commit af298af
Commit 4e0beb0
Commits on Dec 7, 2018
Store the temporary join result in offheap (TIBCOSoftware#1214)
- check for members with off-heap settings; persist the default CACHE TABLE result in off-heap
- instead of on-demand, use already stored stats
suranjan kumar authored and Sumedh Wale committed Dec 7, 2018 (commit 80f2d30)
Fixes for issues seen in different putInto tests (TIBCOSoftware#1213)
- switched all intermediate arrays in decoders/encoders to use BufferAllocator so that they can be accounted properly (only in off-heap config for now)
- added proper close methods everywhere for both encoders and decoders, and invoke close in the ColumnTableScan generated code
- force a repartition on the implicit BATCHID column in column update/delete when there is no explicit partitioning on the table, so that all changes for a batch stay together and get applied as a single delta rather than spread across all partitions
- use KEY_COLUMNS as partitioning columns if none have been specified explicitly
- use SnappyCacheTableCommand for the putInto intermediate cache to show the plan of the cache properly
- moved task interrupt checks into both column and row iterators so that task execution can stop quickly; removed the check from ColumnTableScan generated code
- drop the temp table when no caching is done
- reduce default spark.memory.fraction to 0.85
- project only the table columns (and the implicit batchId column) before caching, to reduce cache size
- use the cached intermediate result for both the insert and update parts of putInto (instead of just the former)
Fix for an InternalGemFireError seen with putIntos: ColumnDelta.apply could return the original value without changing anything for stats rows, but it was returning the result of a getValueRetain call (which could be a temporary copy) instead of the original region value. This could lead to the temporary value being released completely after writing to the Oplog, so that the value put in the region would have an incorrect reference count, leading to a cascading set of problems. The specific InternalGemFireError is due to the internal buffer of this released value being swapped during release to an empty one without any accounting for it.
Fix for SNAP-2756: in some rare cases a value might get completely released before a thread reaches the second sync block in ColumnFormatValue.decompressValue. For such cases return null as the result value (the caller will read from disk if required).
Some cleanups in ColumnFormatEntry after code reviews. Log the original (constructed) DDL string for column table creates.
Test fixes:
- fixed a failure in SplitSnappyClusterDUnitTest.testRowTableCreation due to the "ANONYMOUS" schema created in a previous embedded thrift server test
- removed the obsolete ResolveAggregationExpressions, which is now handled by RefParamLiteral
- moved the snappy-core dependency up for the kafka tests to work from IDEA, due to the conflicting class KafkaTestUtils getting picked from spark instead
- fixed a failing test: a column table must have at least one extra column apart from the partition column to perform a put into; with the recent changes key columns become partition columns when partition columns are not explicitly specified, so the test started failing as it had only one column and that same column was declared as the key column
Sumedh Wale authored Dec 7, 2018 (commit 6dc0a7d)
Commits on Dec 12, 2018
[SNAP-2773] improve GUI display of put into (TIBCOSoftware#1217)
- pass the query string along to SnappyCacheTableCommand to display in the GUI
- removed the extra DS.count() and instead pass back the count of cached batches from the cache query
- put into passes a "CACHE FOR (sql)" string if SQL is available, else uses "PUT INTO <plan>" when using the putInto API
- also changed CACHE TABLE <table> without a query to use the off-heap route if enabled, and display the full plan in the GUI as done for the query case
- set the SQL text as the Spark job description to show in the jobs/stages tabs in the UI; set and clear the SparkContext job description in pairs, else it can get carried over to the next execution (e.g. API execution with smart connector or zeppelin that does not use SnappySession.sqlPlan)
- avoid showing two plans for CTAS and CACHE QUERY statements; skip the empty second plan for ExecutedCommandExec so that only the insertion plan is shown
Sumedh Wale authored Dec 12, 2018 (commit 28191cb)
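The "set and clear the job description in pairs" point above is a scoping pattern: whatever description a query installs on the shared context must be removed when that query finishes, or it leaks into unrelated jobs. A hedged sketch with an invented stand-in for the context (not the Spark API itself):

```python
from contextlib import contextmanager

class SparkContextLike:
    # assumption: a single mutable job-description slot, as on a real
    # shared context object
    def __init__(self):
        self.job_description = None

@contextmanager
def job_description(sc, desc):
    previous = sc.job_description
    sc.job_description = desc          # visible in the jobs/stages UI tabs
    try:
        yield
    finally:
        sc.job_description = previous  # cleared even if the job fails

sc = SparkContextLike()
with job_description(sc, "PUT INTO t SELECT ..."):
    ran_with = sc.job_description
```

Pairing set/clear in one construct makes the carry-over bug described in the commit structurally impossible, rather than relying on every caller to remember the cleanup.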
Commits on Dec 17, 2018
SNAP-2745: In streaming_sink with conflation enabled, a delete operation packed between two inserts results in extraneous records
- if the event type of the last event for a key is insert and there is more than one event for the same key, then convert the inserts to put into
- skip caching (and materializing) of the incoming dataframe when target tables don't contain key_columns/primary_keys, as all events will be treated as inserts in that case
Commit 67d0bd7
Test changes for simulating an OOME reported by a user (TIBCOSoftware#1218)
- Modified the schema in the existing putInto long-running test to include 30 columns, mostly string columns
- In this test, 20 columns have valid values and 10 columns have null values
- Key_columns is specified by a composite key including two string columns
- Added insert ops in the test using data from JSON files at regular intervals
- Added a job to generate the required number of JSON data files containing unique values for key_columns for a user-specified schema
- Framework changes to remove parameters added explicitly for taking a heap dump on OOME
- Reduced default SleepTimeSecsForJobStatus and SleepTimeSecsForMemberStatus from 120 secs and 30 secs respectively to 5 secs, to minimize test execution time
Commit dc03e72
Commits on Dec 18, 2018
RefParamLiteral being checked for equality against another RefParamLiteral (TIBCOSoftware#1212)
A RefParamLiteral of a RefParamLiteral was being created, so referenceEquals was returning false. Corrected the search to ensure that RefParamLiterals are skipped, and added an assertion that a RefParamLiteral should never wrap another RefParamLiteral. Added a test with views to check the above.
Commit 478aecf
Catalog cleanup and integration with Spark's HiveExternalCatalog (TIBCOSoftware#1219)
This replaces the current session catalog implementations (SnappyStoreHiveCatalog, SnappyConnectorExternalCatalog) with a single new implementation, SnappySessionCatalog, common to both embedded and connector modes. All real work has been shifted to the ExternalCatalog implementations SnappyHiveExternalCatalog and ConnectorExternalCatalog respectively. The former now extends HiveExternalCatalog and has minimal overrides, to behave as close to Spark as possible. The smart connector catalog has been rewritten to use two common procedures to get and update catalog meta-data, passing generic arguments encapsulated in thrift classes. The older separate procedures for DDL executions have all been removed, and it no longer uses a hive client for queries either. These changes allow the smart connector to execute all kinds of catalog updates available in the Spark interface, including things like altering buckets/partitions for Spark parquet/ORC tables.
Other changes proposed in this pull request:
- enabled a bunch of compatibility tests; this will close multiple open issues, to be handled separately when this branch merges
- added an implicit retry for catalog stale exceptions in queries
- invalidate the entire connector cache on a create/drop/alter, since the version stored for other relations in RelationInfo will also certainly be stale
- replaced koloboke maps with eclipse collections: the koloboke project has been dead and unmaintained for a couple of years now, though the latter are a bit slower for some operations and also add significant bulk (~10M)
- changed tokenize to be a session property rather than a global one
- fixed an issue with update sub-queries due to alias removal
- clear the global view catalog explicitly in close
- allow for the absence of baseTable in external catalog table drop, since it can be a temporary table
- add gemfire to the providers for which dbtable is added implicitly
- added a special path for the "gemfire" data source: allow it to make a catalog entry during create table execution in its createRelation itself, since it needs the creation to add new parameters to the options bag
- fixed dependent handling to avoid duplicates
Sumedh Wale authored Dec 18, 2018 (commit a091fc4)
Commits on Dec 20, 2018
-
[SNAP-2751] Enable connecting to secure SnappyData via Thrift server (TIBCOSoftware#1211)
* Changes from @sumwale to set the credentials from the thrift layer into the session conf, plus a property to trigger authentication when set. Added a unit test.
* Moved the properties that enable hive server start from QueryRoutingDUnitSecurityTest to ClusterManagerLDAPTestBase so that the test does not fail when run in a suite.
* Replaced null username/password with empty strings.
Commit SHA: ce7bd49
-
Fixing missing "dbtable" property when recovering from older meta-data
Sumedh Wale committed Dec 20, 2018
Commit SHA: 0b78bdd
Commits on Dec 21, 2018
-
[SNAP-2789] implement sameResult for table scans (TIBCOSoftware#1223)
Broadcast/exchange reuse was not happening for column/row tables because the scans held RDD and BaseRelation objects in their case classes, so the equality match against a similar scan holding different objects for those two fields failed. Changes proposed in this pull request:
- added overrides for sameResult to both column and row table scans that depend only on the table name and the schema of the output projection
- added a unit test for checking reuse of exchange/broadcast in QueryTest
- corrected the code to avoid a double execution display in the SQL tab for CTAS (was missing the check against CreateTableUsingCommand)
- enhanced the data path in ExternalTableMetadata to include locationUris for file-based stores and the URL for JDBC sources, masking the password portion
- refactored catalog classes to split cleanly into public and impl classes so that the connector v2 implementation can use them
Sumedh Wale committed Dec 21, 2018
Commit SHA: 521480f
Commits on Dec 22, 2018
-
Sumedh Wale committed Dec 22, 2018
Commit SHA: ccf00d7
-
[SNAP-2790] check valid key_columns in table create (TIBCOSoftware#1225)
- call "pruneSchema" for key_columns, which checks for the presence of the columns
- fail if key_columns is specified for a row table
- change Seq.empty to Nil
Sumedh Wale authored Dec 22, 2018
Commit SHA: 1f81fea
-
Fix a deadlock between store catalog and spark one in local mode
Sumedh Wale committed Dec 22, 2018
Commit SHA: 5d41bf9
Commits on Dec 28, 2018
-
- updated build files for gradle 5.0
- updated dependency versions to compatible recent ones where possible
- changed all tests to avoid using println/show as far as possible and use logInfo instead; this avoids polluting the standard output with all kinds of information
- updated NOTICE file with the current dependency jar versions
Sumedh Wale committed Dec 28, 2018
Commit SHA: 90f3581
-
Fixed the refresh policy on ldap group refresh so as not to apply to tables other than those of type policy.
Commit SHA: 8e08267
-
[SNAP-2818] trim the JOB_DESCRIPTION property in Spark jobs (TIBCOSoftware#1227)
- previous changes set the SparkContext.SPARK_JOB_DESCRIPTION property to the query string for SnappySession.sql executions, but this can exceed 32K and the property will then fail in serialization, so trim it to 100 characters with a "..." continuation like the SQL tab display does
- added a large-view test to ViewTest and enhanced it to accept a generic "String => DataFrame" closure so that the same can be used for scala tests of SnappySession as well as dunits for JDBC Statement.execute; added the same tests to DDLRoutingDUnitTest using this. Note: the above test is unable to reproduce the original issue with CREATE VIEW but it does reproduce it for a large query string
- disallow CREATE INDEX creation on column tables without the experimental-features property
- clear the catalog cache in shutdown to avoid its accidental use by subsequent tests
Sumedh Wale authored Dec 28, 2018
Commit SHA: 9299a80
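The trimming described in SNAP-2818 amounts to keeping a bounded prefix of the query string with a "..." continuation. A minimal sketch of the idea (the function name is hypothetical; the real change lives in the Scala code that sets SPARK_JOB_DESCRIPTION):

```python
def trim_job_description(desc, limit=100):
    # Query strings set as the job description can exceed the 32K string
    # limit during serialization, so keep only a short prefix with a "..."
    # marker, similar to what the SQL tab display does.
    if desc is None or len(desc) <= limit:
        return desc
    return desc[:limit] + "..."
```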
Commits on Dec 29, 2018
-
exclude old servlet-api which might conflict with javax.servlet-api
also change javax.servlet-api version to 3.0.1
Sumedh Wale committed Dec 29, 2018
Commit SHA: 97fdc4e
-
Improve coverage in a few tests and reduce their running time
Sumedh Wale committed Dec 29, 2018
Commit SHA: 1cea2eb
Commits on Jan 2, 2019
-
removing 1995-2015_ParquetEmptyData from repository
Sumedh Wale committed Jan 2, 2019
Commit SHA: adb5966
-
SNAP-2807 - skipping putInto and deleteFrom operations in the sink if the incoming batch doesn't contain any update or delete events (TIBCOSoftware#1229)
Commit SHA: 930defa
Commits on Jan 4, 2019
-
Hydra test coverage for complex data types (TIBCOSoftware#1221)
- Create a table with one of the columns as ARRAY type and run the queries.
- Create a table with multiple MAP type columns and run the queries.
- Create a table with one of the columns as STRUCT type and run the queries.
- Create a table with one of the columns as ARRAY of STRUCT and run the queries.
- Create a table with one column as ARRAY type, a second column as MAP type and a third column as STRUCT type.
- Create a table with one of the columns as MAP type [String as key, Array as value] and run the queries.
- Create the complexType SQL scripts to test the above cases with Snappy-Shell. Will add the Hydra test later on; the Snappy-Shell (JDBC Hydra) test is currently parked on low priority.
Commit SHA: 45a6442
-
Separated Encoders module from the core. (TIBCOSoftware#1228)
- Separated the encoder and decoder classes from core into a separate encoders module. This will help in accessing encoders from different connectors (e.g. the V2 connector in the stock Spark case).
- A separate jar of the encoders is available which will work across Spark versions.
Commit SHA: c5d0c7a
-
Commit SHA: 2abf32d
Commits on Jan 8, 2019
-
Docv1.0.2.1 temp (TIBCOSoftware#1233)
Changes for the Auto-Refresh button on SnappyData Pulse and misc changes. Edits for grammar, language and consistency. Changes in the properties section. Section created: Important Settings > SnappyData Smart Connector Mode and Local Mode Settings for Handling Out-of-Memory Error in SnappyData Cluster.
Commit SHA: 7ab5272
Commits on Jan 10, 2019
-
Docv1.0.2.1 temp (TIBCOSoftware#1236)
Added content in Accessing SnappyData Tables from any Spark (2.1+) Cluster topic in the Programming guide section.
Commit SHA: 5b27909
-
Docv1.0.2.1 temp (TIBCOSoftware#1237)
* Updates done for SNAP-2651 misc documentation changes.
* Changed "class SnappySampleJob extends SnappySQLJob" to "class SnappySampleJob extends JavaSnappySQLJob" as suggested by Vatsal.
* Added -critical-off-heap-percentage and -eviction-off-heap-percentage properties to the Configuring Servers section, plus edits for language, formats, and consistency.
* Edits to content for handling large-size tableau extract results and minor edits to the list of properties.
* Added a topic in the Programming section > Using SnappyData for any Spark Distribution and a topic in How-Tos > How to use SnappyData for any Spark Distribution, with corresponding links in the programming guide and an entry in mkdocs.yml.
* Edited content in Important Settings > SnappyData Smart Connector Mode and Local Mode Settings > Handling Out-of-Memory Error in SnappyData Cluster to include details about the timeout properties for jvmkill and heap dump.
* Edited the eviction-heap-percentage and log-size file properties in both topics.
* Updates to include the Auto-Refresh switch on the SnappyData Pulse page, with minor language and grammar edits.
* Added Spark jobserver properties and other properties as specified in SNAP-2538.
* Updates to the Supported Datatypes guide and a new "how-to store retrieve complex datatypes" topic based on inputs from Shirish.
* Edits and changes to the supported data types guide and the related how-to.
* Changed default buckets to 8 from 128.
* Added content for the topic Access SnappyData Tables from any Spark (2.1+) cluster.
Commit SHA: dc1ff2e
Commits on Jan 11, 2019
-
Commit SHA: 2d9e4a4
Commits on Jan 15, 2019
-
Fix backward compatibility issues with sample tables
- add schema name to BASETABLE if it is not fully qualified
- explicitly add weightage column at catalog level if not present
- delete existing temporary hive directory on restart
Sumedh Wale committed Jan 15, 2019
Commit SHA: 7a7e902
Commits on Jan 17, 2019
-
Fixing issues seen in hydra test after gradle upgrade 5.0 (TIBCOSoftware#1232)
* Fixed the path for the store test jar.
* Added code to check and ignore the .01 decimal mismatch in case rows in validation do not match.
* Fixed a decimal precision issue where, for columns in ct tables, aggregate query values were returned as 'NULL'.
* Fixed an issue where a snappy job could not be resubmitted after leadHA, as the old lead details were used.
* Removed a commented query.
* Rounding off the decimal difference before comparison.
Commit SHA: e87cf3c
Commits on Jan 21, 2019
-
SNAP-2716 - considering alias name as part of the normalizedPlan used to compare CachedKey (TIBCOSoftware#1243)
Commit SHA: 0e025cf
Commits on Jan 22, 2019
-
Using "create" instead of "put" - solves unique alias name issue (TIBCOSoftware#1247)
Fix to enforce unique alias names for packages in the deploy package command.
Commit SHA: b47944b
Commits on Jan 23, 2019
-
Revert "- Using "create" instead of "put" - solves unique alias name issue (TIBCOSoftware#1247)"
This reverts commit b47944b.
Paresh-Snappy committed Jan 23, 2019
Commit SHA: 305b88e
Commits on Jan 24, 2019
-
[SNAP-2869] handling conflation when _eventType column is not available in input (TIBCOSoftware#1240)
- skipping _eventType column handling from the conflation logic when the input dataframe doesn't contain an _eventType column
- throwing AnalysisException when the target table doesn't contain key columns or a primary key
Commit SHA: 709cc6a
-
Commit SHA: 1cc0d7e
-
Commit SHA: df0e294
Commits on Feb 4, 2019
-
Commit SHA: 39dfa7a
-
Fix for filter push-down to the scan level when the IN list has constants but wrapped in Cast nodes.
Neeraj Kumar committed Feb 4, 2019
Commit SHA: 1630148
-
Commit SHA: 755a5c7
-
Commit SHA: 096c418
-
Commit SHA: dd080ab
Commits on Feb 5, 2019
-
Commit SHA: eb6cb8e
-
SNAP-2719 and SNAP-2457 - falling back to uncached SQL flow for streaming queries and applying plan caching only for JDBC queries (TIBCOSoftware#1205)
- SNAP-2719 - falling back to the uncached SQL flow for streaming queries
- SNAP-2719 - renamed the sqlCached method to sqlInternal, as sqlCached is misleading given that disabling plan caching can still make that method behave in the uncached manner; also made the method private
- [SNAP-2457] Apply plan caching only for JDBC queries (TIBCOSoftware#1119)
- updated tests failing with plan caching disabled
- changes in test frameworks to run tests with a random value of plan caching
- removed the planCachingAll property, as its behavior was misleading and it is not required after keeping the plan caching default value as false
Commit SHA: f4036f4
Commits on Feb 6, 2019
-
Docv1.0.3 temp (TIBCOSoftware#1257)
* Include spark.context-settings in the list of properties and add the respective properties for Leads, Locators and Servers configuration; checked consistency, language, grammar etc. in the topic.
* Incorporated corrections for the SWAP file as suggested by Trilok.
* Changed the percentage for -critical-heap-percentage and -eviction-heap-percentage.
* Updated the Create table section for compression, buckets, and redundancy.
* Changed the odbc installer version.
* Included a section about auto-configuring off-heap.
Commit SHA: 4db068d
-
Lizy committed Feb 6, 2019
Commit SHA: 2965603
-
Fix for SNAP-2887. Checking for primary key columns also to identify handled filters (TIBCOSoftware#1252)
Commit SHA: ec944c8
Commits on Feb 8, 2019
-
A --config directory can be passed to the snappy-start-all script (TIBCOSoftware#1258)
* A --config directory can now be passed to the snappy-start-all script to take config files from that folder instead of the default conf folder. Note that the log4j.properties file is still taken from the default conf folder; this will be fixed as part of SNAP-2911.
kneeraj authored Feb 8, 2019
Commit SHA: 5911532
-
Snap 2237 (TIBCOSoftware#1253)
* added bug test for SNAP-2237 * Fixing SNAP-2237 by putting a resolution rule in SnappySessionState & AQPSessionState
Commit SHA: c0274fc
-
Fix for SNAP-2368, handling the case when SnappyDataBaseDialect is used to determine the table schema with the table name not containing the schema name (TIBCOSoftware#1255)
* enhanced bug test for SNAP-2368
* added ignored bug test for SNAP-2901
Commit SHA: 052a8f5
Commits on Feb 11, 2019
-
Commit SHA: b54f6ad
Commits on Feb 12, 2019
-
added bug test for SNAP-2827 (TIBCOSoftware#1254)
* added bug test for SNAP-2827 * enhanced the bug test for SNAP-2827
Commit SHA: e84f8f8
-
Commit SHA: 27439b2
Commits on Feb 13, 2019
-
Commit SHA: da79990
-
Branch 1.0.2.2 (TIBCOSoftware#1231)
* Fix for the JDBC driver jar running with Spark 2.3+ versions (java.lang.NoSuchFieldError: MAX_ROUNDED_ARRAY_LENGTH). The Snappy JDBC driver jar contains a class org.apache.spark.unsafe.array.ByteArrayMethods which changed in Spark 2.3.1. When the Snappy JDBC driver jar dependency is used, this class is loaded from the JDBC jar instead of the Spark 2.3.1 jars (or a later Spark version) and the program errors out as it expects a newer version of the class. Fixed by relocating org.apache.spark.unsafe to io.snappydata.org.apache.spark.unsafe in the build.gradle for the JDBC module.
Commit SHA: 568b7e1
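The relocation described in this fix can be expressed with the Gradle Shadow plugin's relocate directive. A sketch of what the jdbc module's build.gradle change would look like, assuming the Shadow plugin is applied to that module (the exact block layout is an assumption):

```groovy
// jdbc/build.gradle (sketch; assumes the Gradle Shadow plugin is applied)
shadowJar {
    // Ship a private copy of the unsafe classes so the driver jar can no
    // longer shadow the newer ByteArrayMethods that Spark 2.3+ expects.
    relocate 'org.apache.spark.unsafe', 'io.snappydata.org.apache.spark.unsafe'
}
```

Relocation rewrites both the class files and the bytecode references inside the shaded jar, so the driver keeps working against its own copy while the application's Spark classes resolve normally.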
Commits on Feb 20, 2019
-
Spotfire Apache Spark compatibility changes
- SHOW DATABASES as an alias for SHOW SCHEMAS
- support in SnappySqlParser for spark.sql.variable.substitute to substitute ${var} in the query string
Piyush Bisen committed Feb 20, 2019
Commit SHA: 7607064
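The ${var} substitution described above is essentially a lookup-driven replace over the query string before parsing. A rough Python sketch of the idea (`substitute_vars` is a hypothetical name; the actual support sits in SnappySqlParser behind the spark.sql.variable.substitute setting):

```python
import re

_VAR = re.compile(r"\$\{([^}]+)\}")


def substitute_vars(sql, variables):
    # Replace ${name} references in a query string with values from the
    # variables map, leaving unknown names intact.
    def repl(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))

    return _VAR.sub(repl, sql)
```

Leaving unknown variables untouched (rather than failing) is one reasonable choice for a sketch; the real parser's behavior for missing variables is not specified by this commit message.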
Commit SHA: 1a69600
Commits on Feb 21, 2019
-
Fixing the index of the byte used for setting & retrieving the projection bit set (TIBCOSoftware#1250). Fixes SNAP-2890.
Commit SHA: 47cb4ed
Commits on Feb 22, 2019
-
Snap 2760 (TIBCOSoftware#1234)
* Changes for getting the ldap groups of a user. Added a sql function CURRENT_USER_LDAP_GROUPS which returns an array of ldap group names to which the user belongs.
Commit SHA: 9fe6b0b
-
Commit SHA: 760d2df
-
Snap 2900 (TIBCOSoftware#1259)
* Code changes for SNAP-2900:
- Adding HTML changes for a table column holding a control button to expand and collapse a row in the members list table.
- Adding an expand/collapse control to expand and collapse all rows in the table in one click.
- Fixing a column width issue.
Commit SHA: f6a5b72
-
Commit SHA: 01678c0
-
Commit SHA: fe84a02
Commits on Feb 27, 2019
-
SNAP-2931 - counting each event type separately and performing only required operations (TIBCOSoftware#1265)
Earlier we were counting only delete and update events and skipping both operations if no update or delete events existed, while the insert job was submitted irrespective of the presence of insert events. With this change, we count events for all event types and perform an operation only if the respective events are present in the incoming batch.
Commit SHA: 8de3c13
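The per-event-type gating above amounts to one counting pass over the batch followed by conditional job submission. A generic sketch of that logic (the event-type codes and `plan_sink_operations` are hypothetical illustrations, not SnappyData's actual _eventType values or API):

```python
from collections import Counter

# Hypothetical _eventType codes for illustration only.
INSERT, UPDATE, DELETE = 0, 1, 2


def plan_sink_operations(event_types):
    # Count each event type in a single pass, then report which sink jobs
    # actually have work, so no job is submitted for an absent event type.
    counts = Counter(event_types)
    return {
        "delete": counts[DELETE] > 0,
        "update": counts[UPDATE] > 0,
        "insert": counts[INSERT] > 0,
    }
```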
Commits on Mar 14, 2019
-
Added test for batch dml operation using prepared statement (TIBCOSoftware#1235)
Commit SHA: b3b0783
Commits on Mar 20, 2019
-
Fixes for SNAP-2889, SNAP-2913 and other miscellaneous fixes (TIBCOSoftware#1249)
## SNAP-2889
Accepting the sink state table schema name as a sink option. This also enables the client to control the security aspects of the sink state table. A new property stateTableSchema is exposed as part of these changes; the client code is supposed to provide the schema of the sink state table as the value of this property. When security is enabled, stateTableSchema is a mandatory option and the current user running the job must have the necessary permissions on the specified schema in order to run a streaming job. When security is not enabled, the option is optional and defaults to the "APP" schema.
## SNAP-2913
Removing the extraneous streamQueryId property used to uniquely identify stream queries running across the cluster. The same property was used to maintain the streaming query state inside snappy's stream state table. Instead of streamQueryId, the queryName property (specified by DataStreamWriter#queryName) is used to serve the same purpose now.
## Miscellaneous
- Moving snappy sink related constants inside SnappySinkCallback.scala and keeping their scope private to the org.apache.spark.sql.streaming package.
- Unpersisting the persisted dataframe batch.
Commit SHA: a109243
-
Tempdocv1021 (TIBCOSoftware#1272)
* Recommendations in Best Practices.
* Updates for the JDBC Driver section.
* API section updated for the corresponding returns.
* New topic: Enable SSL encryption for all socket endpoints in the SnappyData cluster, with related links.
* Edited properties for SSL-specific Spark properties.
* Incorporated review comments from Swati and Shirish for the Enabling SSL topic and Recommendations for Best Practices.
Commit SHA: ace0a7c
-
Corrected broken links. (TIBCOSoftware#1274)
* Corrected broken links. * minor edits.
Commit SHA: 1698bac
Commits on Mar 27, 2019
-
Fixes for limit query (TIBCOSoftware#1276)
- A limit query on external tables resulted in a scan of all partitions when the result could be returned by scanning just 1 partition.
- Exception "parkSQLExecuteImpl: getPartitionData() block rdd_30_0 not found" was thrown for a multi-partition limit query.
Commit SHA: dd590a2
Commits on Mar 29, 2019
-
Vatsal Mevada committed Mar 29, 2019
Commit SHA: d727fef
Commits on Apr 4, 2019
-
Commit SHA: 6da24be
-
[SNAP-2956] Changes to wrap non-fatal OOME from Spark layer in LowMemoryException (TIBCOSoftware#1277)
Commit SHA: e6c73e4
-
Commit SHA: 9654cc2
Commits on Apr 8, 2019
-
Commit SHA: 8890e7d
-
Snap 2474 | Prune partition of row table (TIBCOSoftware#1273)
Partition pruning support added for the Row Table scan. Previously partition pruning worked on the Column Table scan only; it is now similarly added for the Row Table scan. Also added supporting unit and DUnit test cases.
Commit SHA: 956f724
-
Code changes for SNAP-2965: (TIBCOSoftware#1280)
- Adding disk store UUID and disk store name into members summary.
Commit SHA: 58583d3
-
Commit SHA: 630d805
Commits on Apr 9, 2019
-
Changes for SNAP-2860 (TIBCOSoftware#1263)
- Added fix with the help of sumedh. - Added test.
Commit SHA: f15975c
-
Use lower-case for listDatabases/listTables for Spark compatibility (TIBCOSoftware#1271)
- use the Spark convention of returning Catalog listDatabases/listTables in lower-case to work better with some external tools (e.g. Spotfire)
- updated tests to deal with this change
- fixed precheckin failures
Commit SHA: 41594f5
Commits on Apr 10, 2019
-
Commit SHA: 0d57286
Commits on Apr 11, 2019
-
Hydra test coverage for stability test and patients schema (TIBCOSoftware#1287)
- Added a test to run concurrent queries in a cluster loaded with more than 1 TB of data using various schemas (reviews, patients, NYCTaxi, airline, TPCH)
- Added queries for column tables, external tables and joins between column tables and external tables for tables in all the schemas mentioned above
- Disabled snappyHashAggregate while running the analytical queries in the stability test
- Added a script to run the stability test
- Added a hydra (column table) test for the spd schema
- Added a spark app for SPVA table creation and data loading
- Added validation of all queries in the SPVA schema against spark
Commit SHA: 727428a
-
Commit SHA: eb4d919
Commits on Apr 12, 2019
-
Update Spark and store module versions
Sumedh Wale committed Apr 12, 2019
Commit SHA: e5a81e2
-
generating SparkR library along with snappy product (TIBCOSoftware#1286)
Generating the SparkR library along with the snappy product. Note that building the SparkR library requires R installed on the build machine. By default ./gradlew product will not build the SparkR library and it will not be included as part of snappy's distribution; the SparkR library can be included in the product distribution by passing the R.enable build property.
Commit SHA: ce1874e
-
[SNAP-2975] fix SEGV in putInto (TIBCOSoftware#1288)
Output projection in putInto scans may not have the extra columns used for update/delete. Specifically, since the putInto plan uses/combines the same plan objects for join/anti-join, one of the scans has the additional columns but does not get fed directly to ColumnUpdateExec; rather it is used only in the join, so the output projection does not have those extra columns while the relation's schema (created from ColumnFormatRelation) has them.
- use the relation schema to determine the put/update case rather than the output projection in ColumnTableScan
- removed duplicate methods across SharedUtils and Utils
Sumedh Wale authored Apr 12, 2019
Commit SHA: 1bd4c2d
-
Cluster stability MALLOC settings and SNAP-2959 (TIBCOSoftware#1279)
* Changed the weight of the lead node to 17. It adds up to 27, which is slightly less than the weight of two servers + 1 locator. With this, in case of a network partition, if 3 or more servers are on one side then the lead node will be shut down.
suranjan kumar authored Apr 12, 2019
Commit SHA: 1942811
Commits on Apr 13, 2019
-
Neeraj Kumar committed Apr 13, 2019
Commit SHA: 19856e6
Commits on Apr 15, 2019
-
Quote table name in some commands before sending to GemXD (TIBCOSoftware#1289)
Quote the table and schema name in some commands before executing on a GemFireXD connection. This allows support for reserved keywords in the GemFireXD parser, like "default" as a schema name.
## Changes proposed in this pull request
- quote the table name and convert it to upper-case in INSERT/PUT and ALTER TABLE before executing on a GemFireXD connection
- added a unit test for the "default" schema in BugTest using a session and a JDBC connection
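The quote-and-upper-case step described above follows the usual SQL delimited-identifier rules: fold the unquoted name to upper case (as GemFireXD would for an unquoted identifier), then wrap it in double quotes, doubling any embedded quotes. A minimal sketch with hypothetical helper names:

```python
def quote_identifier(name):
    # Upper-case the name (matching the store's folding of unquoted
    # identifiers), escape embedded double quotes, then delimit it so
    # reserved words like "default" parse as plain names.
    return '"' + name.upper().replace('"', '""') + '"'


def qualified_table(schema, table):
    # Build a fully qualified, safely quoted schema.table reference.
    return quote_identifier(schema) + "." + quote_identifier(table)
```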
Sumedh Wale authored Apr 15, 2019
Commit SHA: 5fade0b
-
fix an occasional exception string match failure
Sumedh Wale committed Apr 15, 2019
Commit SHA: 27de95b
Commits on Apr 18, 2019
Suranjan Kumar committed Apr 18, 2019
Commit SHA: 08cd95c
Changes for SNAP-2974: Snappy UI rebranding to TIBCO ComputeDB (TIBCOSoftware#1290)
1. Adding Product Edition type in version information
2. Changing App Name from SnappyData to TIBCO ComputeDB
3. Renaming pages to just Dashboard, Member Details and Jobs
4. Removing or changing user-visible SnappyData references on UI to TIBCO ComputeDB.
Commit SHA: 6edadfa

Commit SHA: ba0c414
Commits on Apr 19, 2019
SNAP-2902 Mismatch in the expected and actual inserted rows (TIBCOSoftware#1292)
Resolved an issue where iteration over remote entries consistently missed one ColumnBatch per 1000 keys (1000 being the size of a batch of keys fetched in RemoteEntriesIterator).
Commit SHA: 29ebf9e
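The bug above is an off-by-one in an iterator that fetches keys in fixed-size batches of 1000 and loses one entry per batch. One plausible shape of such a bug, and its fix, sketched in Python (hypothetical paging logic, not the actual RemoteEntriesIterator code):

```python
def batched_iter(keys, batch_size=1000):
    """Iterate over all keys by fetching fixed-size batches.

    A buggy variant can advance the cursor by batch_size + 1 (e.g. after
    also consuming one element to prime the next fetch), silently
    dropping one key per batch. The fix is to advance by exactly the
    number of keys actually returned by the fetch.
    """
    pos = 0
    while pos < len(keys):
        batch = keys[pos:pos + batch_size]
        pos += len(batch)  # advance by what was actually fetched
        for k in batch:
            yield k


# 2500 keys -> a lossy iterator would return 2498; this returns all 2500
assert len(list(batched_iter(list(range(2500)), 1000))) == 2500
```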
Fix for SNAP-2982 (TIBCOSoftware#1293)
Ensure StoreHiveCatalog is initialized before getting a handle to Hive client in SnappySharedState.
Commit SHA: 51ac4b9
Versioning related changes for upcoming 1.1.0 release (TIBCOSoftware#1291)
Commit SHA: 8f2a173
Commits on Apr 24, 2019
Start hive-thriftserver by default in background (TIBCOSoftware#1297)
- changed default of hive-thriftserver to true
- to reduce startup time, the default behaviour is to launch the thrift server in background with a startup message stating "Starting ..." (without host/port information)
- if snappydata.hiveServer.enabled=true is explicitly provided then the hive thriftserver is launched in foreground with proper host/port information like before
- correct deps of encoders to exclude spark from connector shadowJar
- removing non-existent junit dependency in compatibilityTests
- fixing failures in split mode tests: correct jdbc jar name in tests and docs
- update jar names in docs
Sumedh Wale authored Apr 24, 2019
Commit SHA: f7ecc73
Workaround fix for SNAP-2440 (TIBCOSoftware#1298)
For a single table this workaround is good, but for tables with the same name in different schemas it will have issues. Will be revisited as part of SNAP-2442.
kneeraj authored Apr 24, 2019
Commit SHA: 714bd2d
Commits on Apr 25, 2019
Fix buffer reference count for a couple of cases (TIBCOSoftware#1300)
- release compression buffer if compression has been determined to be ineffective
- release positions array buffer created by ColumnDeltaEncoder
Sumedh Wale authored Apr 25, 2019
Commit SHA: 127404f
Revert to older approach for SNAP-2440 (TIBCOSoftware#1301)
- the parser approach to trim off schema does not work for nested aliases (that worked originally without the hack for SNAP-2440) like: SELECT tmp.t.* FROM (SELECT * AS t from test) tmp
- reverting to the older approach in TIBCOSoftware#1298 to catch the exception by name and resolve
Sumedh Wale authored Apr 25, 2019
Commit SHA: af54297

Commit SHA: cc2acdc
Commits on Apr 29, 2019
Code changes for SNAP-2989: Snappy UI rebranding to Tibco ComputeDB iff it's Enterprise Edition (TIBCOSoftware#1299)
1. Setting product name to "SnappyData" if it is community edition, else "TIBCO ComputeDB".
2. Loading GFXD version properties to identify and populate the correct product edition type.
Commit SHA: d54cff3

Commit SHA: fd3aa9d
Commits on Apr 30, 2019
Avoid empty parenthesis for unhandled filter (TIBCOSoftware#1303)
* Append 'FALSE' or 'TRUE' in case of unhandled child filters of OR or AND filter respectively.
suranjan kumar authored Apr 30, 2019
Commit SHA: 96fe3e3
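The fix above works because 'FALSE' is the identity element of OR (x OR FALSE = x) and 'TRUE' is the identity element of AND (x AND TRUE = x): substituting the identity for a child filter that cannot be pushed down keeps the handled sibling's semantics and avoids emitting empty parentheses in the generated SQL. A sketch with a hypothetical tuple-based filter representation (not Snappy's actual Filter classes):

```python
def compile_filter(f, handled):
    """Render a filter tree to a SQL predicate string.

    `handled` maps leaf filter names to SQL snippets; a leaf missing
    from the map is "unhandled" and is replaced by the identity element
    of its parent connective ('FALSE' for OR, 'TRUE' for AND) so the
    generated SQL stays valid and the handled sibling keeps its meaning.
    """
    op, children = f  # ('or' | 'and' | 'leaf', ...)
    if op == "leaf":
        return handled.get(children)  # None when unhandled
    identity = "FALSE" if op == "or" else "TRUE"
    parts = [compile_filter(c, handled) or identity for c in children]
    return "(" + f" {op.upper()} ".join(parts) + ")"


f = ("and", [("leaf", "a"), ("or", [("leaf", "b"), ("leaf", "udf_x")])])
print(compile_filter(f, {"a": "a > 1", "b": "b = 2"}))
# (a > 1 AND (b = 2 OR FALSE))
```

Without the identity substitution, the unhandled `udf_x` child would render as nothing and produce an invalid `(b = 2 OR )`.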
Commits on May 3, 2019
Commit SHA: 8026acd
Commits on May 4, 2019
Commit SHA: 6229863
Commits on May 7, 2019
Version updates 1.1.0 (TIBCOSoftware#1304)
* Updated some metainfo in prep for 1.1.0 release.
* Set AWS instance's public IP instead of public hostname. This avoids issues if the instance is launched within a VPC that is not configured with the enableDnsHostnames attribute set to true (SNAP-2854).
Commit SHA: 6fb800d
Make SnappyConf key name search case-insensitive (TIBCOSoftware#1306)
- search through all pre-defined key names case-insensitively for set/unset of SQLConf properties
- added check for planCaching property for the same
Sumedh Wale authored May 7, 2019
Commit SHA: 3bfefde
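The idea in the commit above (matching pre-defined property names case-insensitively while keeping the canonical spelling) can be sketched as follows; the class and key names here are hypothetical, not SnappyData's actual SnappyConf API:

```python
class CaseInsensitiveConf:
    """Config whose set/get matches pre-defined key names
    case-insensitively, while storing the canonical spelling."""

    def __init__(self, known_keys):
        # map lower-cased name -> canonical (pre-defined) name
        self._canonical = {k.lower(): k for k in known_keys}
        self._values = {}

    def _resolve(self, key):
        # fall back to the key as given when it is not pre-defined
        return self._canonical.get(key.lower(), key)

    def set(self, key, value):
        self._values[self._resolve(key)] = value

    def unset(self, key):
        self._values.pop(self._resolve(key), None)

    def get(self, key, default=None):
        return self._values.get(self._resolve(key), default)


conf = CaseInsensitiveConf(["snappydata.sql.planCaching"])
conf.set("SNAPPYDATA.SQL.PLANCACHING", "true")
print(conf.get("snappydata.sql.planCaching"))  # true
```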
Sumedh Wale committed May 7, 2019
Commit SHA: 3484fb3
Commits on May 9, 2019
Commit SHA: 1622d02
Commits on May 10, 2019
* Updated docs with upcoming release version 1.1.0
* Select product name depending upon enterprise flag, to be printed for 'snappy version' command. Discussed with @kneeraj.
* Link to the latest store commit.
Amogh Shetkar committed May 10, 2019
Commit SHA: 1ed4dc2
Commits on May 15, 2019
Community docv1.1.0 (TIBCOSoftware#1314)
* Adding the images for SnappyData Community Edition
* Adding new files for "How to connect TIBCO Spotfire Desktop to TIBCO ComputeDB"
* Error message documentation for "SmartConnector catalog is not up to date. Please reconstruct the Dataset and retry the operation" in the Troubleshooting guide.
* Updating the community version SnappyData with the changes for 1.1.0 and for the name change of SnappyData Pulse.
* Minor edits to community version (removing cloudbuilder section from AWS) and minor edit in YML
* Added spark.sql.files.maxPartitionBytes in Community Edition
* Introduction to TIBCO ComputeDB as Ent edition, as suggested by Greg.
* Removing redundant lines (community edition)
* Edit typo (community edition)
* Update important_settings.md: add swap space recommendation (Community edition)
* getKeyColumnsAndPositions API added plus note. Based on inputs from Vatsal, changes to the API section (community edition):
  - Removed getTableType API
  - Added getKeyColumnsAndPositions API
  - Added note "This API is not supported in the Smart Connector mode." to createSampleTable, createApproxTSTopK, queryApproxTSTopK APIs
* Docs change: Updated versions for 1.1.0 release (TIBCOSoftware#1311). Review comments will be taken up in separate PRs.
* Incorporating review comments in the Community edition, plus minor edits
* Changes to yml file.
* Updates to Upgrade section, load-balance. Correction to a link in getting started on kubernetes topic.
* Add release notes PDF and archive the old release notes; link to the release notes pdf from the release notes page.
* Adding link to SnappyData Documentation 1.0.2.1 in the Doc Archives, plus a minor edit to Doc Archives.
* Changes to License Model as suggested by Amogh
* Changed to SnappyData from TIBCO ComputeDB
* Changed instructions for community edition.
* Removed Known Issues link
Commit SHA: 92c861d
Commits on May 22, 2019
Snap 2986: Refactor scripts for TPC-H smart connector and TPC-DS (TIBCOSoftware#1295)
* Adding result generation script for smart connector mode
* Minor path corrections in results generation for smart connector
* Delete PerfRun.conf; only the template should be checked in.
* Set properties on snappysession and minor updates to config
* Insert tables in the same order that is followed for embedded cluster for ease of report generation
* Script cleanup and additional parameters for table creation in smart connector mode
* Minor corrections in smart connector scripts
* Echo the new additional args too in smart connector test
* Fixed a typo in results generation script for Spark
* Include SnappyData conf files in results generation
Commit SHA: ffe6b0d
Commits on May 23, 2019
Community docv1.1.0 (TIBCOSoftware#1318)
* Added trademark signs in Getting Started and index
* Updated Troubleshooting section (in community edition) for changes suggested by Vatsal and Shirish.
* Edited the SnappyData name with trademarks in YML.
Commit SHA: 9a16a5d
Commits on May 27, 2019
Fix of bug SNAP-2644 (TIBCOSoftware#1320)
* Instead of blindly skipping PromoteString rule try it first and then skip it if necessary.
kneeraj authored May 27, 2019
Commit SHA: 8284c98
Community docv1.1.0 (TIBCOSoftware#1322)
* Minor changes to Install section.
* Remove instances of -rebalance.
* Added images in how-to topic.
Commit SHA: 427ab42
Commits on May 28, 2019
* Updated version for hotfix release off 1.1.0 release.
* The version is not as per semver, but we're trying to match the existing norms in the release process.
Amogh Shetkar committed May 28, 2019
Commit SHA: 0150db4
Commits on Jun 3, 2019
Provide schema to spark temp table (TIBCOSoftware#1315)
* Provide schema to spark temp table for ct schema to avoid data mismatch issues.
Sonal Agarwal authored Jun 3, 2019
Commit SHA: 572b715
Commits on Jun 5, 2019
Spark compatibility fixes (TIBCOSoftware#1310)
Enabling nearly all disabled Spark compatibility tests and accompanying product fixes for those.
## Changes proposed in this pull request
Primary product fixes include:
- Use lower-case schema/table/column names from Spark layer (TIBCOSoftware#1323). This changes the names used for schema/table/column identifiers in the Snappy catalog from upper-case to lower-case to make it compatible with Spark convention.
- change SnappyData layer to use lower-case names like Spark
- product and test fixes for lower-case names
- restrict SnappyHashAggregateExec if output expressions contain non-code generated expressions
- process string escape characters using ParserUtils.unescapeSQLString
- fixed EncodeScanExec to use proper java class type name for declared variables
- add execution memory metrics for ObjectHashSet usage
- add support for TABLESAMPLE clause in queries
- expanded CTE support for multi-line DMLs (WITH ... FROM ... INSERT ... INSERT ...)
- support for PARTITIONED BY and CLUSTERED BY clauses in CREATE TABLE for external tables
- add support for a number of missing ALTER TABLE commands that change various properties of external tables
- handle the case of NPE in hive catalog lookup during concurrent DROP
- add support for CREATE TABLE ... LIKE
- add support for ANALYZE TABLE command
- add support for DESCRIBE FORMATTED table
- add support for PURGE clause in DROP TABLE
- fix tokenization skip for NAMED_STRUCT
- allow table identifiers to start with a digit
- make a number of keywords non-reserved or weaker than reserved
- clear executionId before SparkPlan.execute in SnappySession.planExecution since the former can lead to actual execution for DMLs/DDLs and thus to a nested executionId
Other test fixes:
- include resource files by path and not as part of jar since some tests expect normal file URLs for those
- add dummy CameraInput/CameraOutput classes since some tests use those as hadoop Input/Output implementations for storage that the hive client tries to materialize before storing in catalog
- force disable codegen fallback to ensure code generation is proper in all cases
- updated semantics for a couple of global temporary view tests where SnappyData allows accessing global temporary views without the "global_temp." schema by design
Sumedh Wale authored Jun 5, 2019
Commit SHA: cd0f20d
Commits on Jun 6, 2019
Change inbuilt sink table names to lower-case
Sumedh Wale committed Jun 6, 2019
Commit SHA: 4a16c3e
Commits on Jun 10, 2019
Snap 2707 (TIBCOSoftware#1242)
* Added tests and fix for SNAP-2707, with several rounds of test changes and review comments incorporated
* Changes for SNAP-2879:
  - Added change in parser code for the put into values operation
  - Added support for put into values for column tables: a dataframe is created for the values and written to the column table using the putInto API
* Added change for showing the count on the snappy shell after execution of a put into query
* Added code for handling null values in a put into query
* Changed the return type of the putInto API to Long
* Added fix (with test) for an issue seen in prepared statements for the put into values syntax on column tables
Commit SHA: 2781e70
Commits on Jun 12, 2019
Swati Mahajan committed Jun 12, 2019
Commit SHA: 29515b5
Commits on Jun 17, 2019
Use full SQL text in job description
Spark job description already trims off the string (it can be expanded if required), so set the full SQL text as the job description.
Sumedh Wale committed Jun 17, 2019
Commit SHA: eb079c4
Commits on Jun 24, 2019
Snap 2878 (TIBCOSoftware#1245)
* Test coverage for SQL functions
* Unit test coverage for plan caching feature and cleanup related changes
  - The tests should be run with plan caching on. Some of the SQL function tests are failing with plan caching on.
  - Some test failures are not related to plan caching. [A separate document has been created to track all these issues]
* Added a few more function tests
* Added formatting and logging changes
* Added test changes for comparing spark and snappy results
Commit SHA: 457e02b
Fix UTF8 serialization failure with large SQL strings
Issue spotted by Vatsal in eb079c4 due to full query string being used as Spark property that is serialized as a UTF8 string. Now trim off the full query string at 10K.
Commit SHA: d58770a
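The failure above arises because Java's modified UTF-8 serialization (e.g. DataOutputStream.writeUTF) rejects strings whose encoded form exceeds 65535 bytes, so storing a full SQL text as a serialized property can blow up on large queries. The fix trims at 10K characters; sketched below (hypothetical names, not the actual SnappyData code):

```python
MAX_SQL_CHARS = 10_000  # well below the 65535-byte modified-UTF-8 cap

def job_description(sql: str) -> str:
    """Trim very large SQL text before storing it as a serialized property.

    A BMP character encodes to at most 3 bytes in (modified) UTF-8, so a
    10K-character cap keeps the encoded form far under 65535 bytes.
    """
    if len(sql) <= MAX_SQL_CHARS:
        return sql
    return sql[:MAX_SQL_CHARS] + "..."


big = "SELECT " + "x," * 20_000
desc = job_description(big)
assert len(desc) == MAX_SQL_CHARS + 3          # truncated plus ellipsis
assert len(desc.encode("utf-8")) < 65535       # safe to serialize
```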
Commits on Jun 26, 2019
Fix for SNAP-3022 - Resetting commonStatementAttributes for thrift (TIBCOSoftware#1321)
Resetting commonStatementAttributes for the thrift connection as part of a task completion listener, making sure it will always be reset.
vatsal mevada authored Jun 26, 2019
Commit SHA: 0f52771
- Adding test cases scenario for SNAP-3028 (TIBCOSoftware#1326)
* Adding test case scenarios for SNAP-3028
* Adding test cases for SNAP-2269 and SNAP-2762
* Adding automated test case for SNAP-3007
* Code refactoring and incorporating review comments
Sonal Agarwal authored Jun 26, 2019
Commit SHA: 5cad326
Commit SHA: b351418
Commits on Jun 28, 2019
[SNAP-2052] Skip implicit cast for update (TIBCOSoftware#1270)
## Throwing analysis exception for update operation in the following cases
- A binary arithmetic operation is performed before the assignment and one of the operands is a string type
- Trying to assign a value of some different type which doesn't match the column's data type. The assignment is still allowed in the following cases even if the data type doesn't match:
  - assigning a null value
  - assigning a narrower decimal to a wider decimal
  - assigning a narrower numeric type to a wider numeric type as far as precision is not compromised
  - assigning narrower numeric types to a decimal type
  - assigning an expression of any data type to a string type column

These changes were required as Spark performs fail-safe implicit type casting which can lead to target column(s) getting updated with NULL if the cast fails (e.g. when the string value is not a number). The user can explicitly cast the value to match the types appropriately. However, it is important to make sure that the value being cast is of an appropriate type, otherwise the cast will again result in NULL. For example, the following statement may lead to the `id` column being populated with `NULL` for all the records: `update table set id = id + cast('abc' as int);` where `id` is an `int` column and the string literal `'abc'` is being cast to `int`, which results in `NULL`.
## Added new implementation of Analyzer namely "SnappyAnalyzer"
For the above mentioned changes we had to add a new implementation of Analyzer, namely `SnappyAnalyzer`, which contains a separate copy of the rule batches available in the upstream `Analyzer`. This change was required because keeping a dummy instance of the analyzer only to fetch the list of rule batches, in order to inject the `StringPromotionCheckForUpdate` rule (introduced to cater for the above mentioned analysis exception), was leading to issues as the `ResolveSubquery` rule ended up using the dummy analyzer instance internally. Also it is not possible to access the batches from the super reference as batches is declared as a lazy val in the upstream `Analyzer`. A test scenario is added to compare the duplicate rule list from `SnappyAnalyzer` with the rule list available in the upstream `Analyzer`. This test should bubble up any future changes in Spark's `Analyzer` if they are not reflected in `SnappyAnalyzer`.
vatsal mevada authored Jun 28, 2019
Commit SHA: bf3cb15
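The assignment rules listed above (allow null, numeric widening, numeric-to-decimal, anything-to-string; reject everything else without an explicit cast) can be sketched as a simple compatibility check. This is a deliberately simplified model, not the actual SnappyAnalyzer rule:

```python
# relative "width" ordering of numeric types (simplified)
NUMERIC_WIDTH = {"byte": 1, "short": 2, "int": 3, "long": 4,
                 "float": 5, "double": 6}

def assignment_allowed(target: str, value: str) -> bool:
    """Simplified version of the update type-check described above.

    Allowed even when types differ: null values, narrower-to-wider
    numeric widening, numeric-to-decimal, and any type assigned to a
    string column. Everything else requires an explicit cast.
    """
    if value == "null" or target == value or target == "string":
        return True
    if target == "decimal" and value in NUMERIC_WIDTH:
        return True
    if target in NUMERIC_WIDTH and value in NUMERIC_WIDTH:
        return NUMERIC_WIDTH[value] <= NUMERIC_WIDTH[target]
    return False


assert assignment_allowed("long", "int")        # widening: OK
assert not assignment_allowed("int", "string")  # implicit cast: rejected
```

A rejected assignment is exactly the case where Spark's fail-safe cast would have silently produced NULL.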
Commits on Jul 1, 2019
Support for providing multi-line config properties for server, lead and locator (TIBCOSoftware#1312)
Sonal Agarwal authored Jul 1, 2019
Commit SHA: e915052
Commit SHA: c95ae2a
Prepared stmt on views gives error [SNAP-2254] (TIBCOSoftware#1334)
Also added a test for SNAP-3007.
Commit SHA: 9869da0
Commits on Jul 2, 2019
Snap 1325 (TIBCOSoftware#1266)
- Added test cases in Java for UDF1 to UDF22 using org.apache.spark.sql.api.java.* [* -> UDF1 to UDF22]
- Added test cases in Scala for UDF1 to UDF22 using org.apache.spark.sql.api.java.* [* -> UDF1 to UDF22]
- Added validation using assert statements.
Commit SHA: e596f34
Commits on Jul 3, 2019
[Snap-3065] Addressing issue in prepared statement with subquery (TIBCOSoftware#1337)
* Added a new rule SnappyPromoteStrings to be applied before Spark's PromoteStrings, as the PromoteStrings rule causes issues in prepared statements by replacing ParamLiteral with NULL in case of BinaryComparison with the left node being StringType and the right being ParamLiteral (or vice-versa)
* Removed redundant analyzerWithoutPromote object
* Added a dunit test that runs TPCH queries as prepared statements
Commit SHA: 3474c0b
Snap 2927 (TIBCOSoftware#1267)
* Added tests for deploy package feature to deploy the cassandra spark connector
  - Test for passing --packages option with the snappy-job.sh submit script
  - Test for deploy package command to deploy the cassandra spark connector maven coordinate and create an external table from a cassandra table
Commit SHA: 7c5cd87
examples module can be used as independent gradle project and extended further as is (TIBCOSoftware#1332)
* Moved SnappyTestRunner from snappy-core to snappy-examples
* Removed snappy-core dependency from examples/build.gradle file.
* Added snappy-examples dependency to core/build.gradle and dtests/build.gradle
* Added README and a snappydata job template to examples.
Commit SHA: 3463309
Fix other issues with decimal precision and modifying scripts (TIBCOSoftware#1333)
* Handling decimal column mismatch fixes for ct schema.
* Fixing decimal issues in sql files.
Sonal Agarwal authored Jul 3, 2019
Commit SHA: cd08b26
Hydra Test coverage for external tables (TIBCOSoftware#1222)
- Added tests related to external tables:
  - Create the external tables through the snappy API, and run the queries through the dataframe API.
  - Create the external tables through SQL statements and run the queries.
Commit SHA: f02a62c
Stability test enhancement to make it more generic in terms of accepting any schema dynamically (TIBCOSoftware#1338)
- Added support to create row and column tables based on the table type provided through a configuration parameter
- Added support to dynamically pass the options required while creating a row/column table
- Modified test to create and load TPCH tables
- Reduced the number of threads from 10 to 3 required for running analytical queries concurrently
- Modified test to continue the concurrent analytical query execution even in case of TimeoutException: Futures timed out
- Added support to use the last element in the vector for the missing type for a table
- Modified the test to have separate confs for creating and loading tables and running concurrent ad-hoc analytical queries
Commit SHA: 33fcaff
Commits on Jul 4, 2019
Synchronizing with latest store.
sonal committed Jul 4, 2019
Commit SHA: 8e0a533
Added a fix and test for SNAP-3024 (TIBCOSoftware#1340)
* Retrying multiple times in case of exception due to stale catalog
Commit SHA: de74cb5
* Added tests for disk-full scenario with inserts for column, partitioned row and replicated row tables.
* Added a test case with delete operations in disk-full scenario
* Added validation code to check data count before and after restart.
Supriya Pillai authored Jul 4, 2019
Commit SHA: 1a157ab
Commits on Jul 5, 2019
Hydra Test Coverage for Complex Data with ~1GB Data + Snappy Smart Connector Mode (TIBCOSoftware#1256)
- Hydra test coverage for complex data types with ~1GB data, in smart connector mode and in embedded mode.
Data types covered:
- ArrayType: SQL and API
- MapType: SQL and API
- StructType: SQL and API
- Combination of all 3 types above in one table (ArrayType, MapType, StructType)
- ArrayOfStructType
- ArrayOfStringInMapAsValue
Commit SHA: 574fcf0
Commit SHA: 360ccf6
[SNAP-3028] Considering jobserver class loader as a key for generated code cache (TIBCOSoftware#1335)
## Considering jobserver class loader as a key for generated code cache
For each submission of a snappy-job, a new URI class loader is used. The first run of a snappy-job may generate some code, and it will be cached. A subsequent run of the snappy job would end up using the generated code cached by the first run. This can lead to issues as the class loader used for the cached code is the one from the first job submission while subsequent submissions use a different class loader. This change is done to avoid such failures.
## SnappyJobTestSupport trait in test framework
Extracted snappy job management related utility methods from `SplitClusterDUnitSecurityTest` to a `SnappyJobTestSupport` trait. `SnappyJobTestSupport` should be extended by test classes testing scenarios related to snappy jobs.
vatsal mevada authored Jul 5, 2019
Commit SHA: c1f8b1a
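The essence of the fix above is making the class loader part of the cache key, so a second job submission (which gets a fresh loader) triggers a fresh compile instead of reusing code bound to the first submission's loader. A sketch with hypothetical names:

```python
class CodeCache:
    """Cache compiled artifacts keyed by (source, loader_id).

    Keying only by source would hand a second job submission code bound
    to the first submission's class loader; including the loader in the
    key forces a fresh compile per loader, as in the fix above.
    """

    def __init__(self):
        self._cache = {}
        self.compiles = 0  # instrumentation for the demo

    def get(self, source, loader_id):
        key = (source, loader_id)
        if key not in self._cache:
            self.compiles += 1
            # stand-in for actual code generation/compilation
            self._cache[key] = compile(source, "<generated>", "exec")
        return self._cache[key]


cache = CodeCache()
cache.get("x = 1", loader_id=1)
cache.get("x = 1", loader_id=1)  # same loader: cache hit
cache.get("x = 1", loader_id=2)  # new loader: recompile
assert cache.compiles == 2
```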
Commits on Jul 8, 2019
Community docv1.1.0 (TIBCOSoftware#1345)
* Added new section in troubleshooting error messages.
* Added AWS assume role topic
* Structured streaming quickref
* Other minor edits
Commit SHA: ac4eaa1
Commits on Jul 10, 2019
Vatsal Mevada committed Jul 10, 2019
Commit SHA: 837ca41
Snap 2050 (TIBCOSoftware#1341)
* Added concurrency tests
* Verifying return value of executeBatch
Commit SHA: 1ed23ff
Commits on Jul 11, 2019
Avoids setting currentDatabase as 'sys' in hive's SessionState (TIBCOSoftware#1349)
This avoids the exception thrown when setting 'sys' as current schema via beeline. [SNAP-2972]
Commit SHA: 34fe3ab
Snap 2631 (TIBCOSoftware#1331)
* Added code in shell scripts to resolve NullPointerException in absence of -dir argument
Commit SHA: 1d0dd48
Commits on Jul 12, 2019
Snap 2228 fix (TIBCOSoftware#1351)
* Tests and fix for SNAP-2228: added a check for schema version when insert/update/delete operations are done from smart connector mode. On schema version change, a CatalogStaleException is now thrown and cached catalog tables are cleared so that the schema is fetched from the embedded side.
Commit SHA: 0a86918
Deploy command allows names with hyphens and dots [SNAP-2355] (TIBCOSoftware#1347)
* Added a new rule that allows '.' and '-' in package name.
* Adding unit test for the ticket.
Commit SHA: 35909a1
Test changes to have explicit casting for small int column in updates (TIBCOSoftware#1353)
* Adding fix for SNAP-3072
* Fixing NPE in the converted rowstore hydra tests.
Sonal Agarwal authored Jul 12, 2019
Commit SHA: c5dbb76
Added code to copy the configuration files into other members of the cluster (TIBCOSoftware#1329)
* Added code to copy the configuration files to other members of the cluster while starting the cluster, and a --nocopyconf argument to avoid the copy of configuration files, i.e. ./snappy-start-all [--skipconfcopy]
SHA: 09f3838
Commits on Jul 15, 2019
-
SHA: 9b0af08
-
SHA: 23cb53c
Commits on Jul 16, 2019
-
Removed the SnappyData core dependency, thereby keeping it consistent with the documentation.
SHA: a8f273b
-
Improving the unit test for [SNAP-2355] (TIBCOSoftware#1356)
* Undeploy the deployed packages after the test, as they were affecting other tests when run together.
SHA: 94b7ac5
-
Commits on Jul 17, 2019
-
SHA: f332373
-
[SNAP-3023] Handle CatalogStaleException with retries in snappy sink (TIBCOSoftware#1354)
Retries streaming batch processing for the snappy sink when `CatalogStaleException` is encountered. Currently the number of attempts is set to 10, with a backoff interval of `100 * attempt_number` millis. Also exposed an internal property (`internal___attempts`) to override this for tests.
vatsal mevada authored Jul 17, 2019
SHA: 75be2e5
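The retry policy this commit describes (10 attempts, backoff of `100 * attempt_number` millis) amounts to a linear-backoff retry loop. A minimal Python sketch of that policy follows; `CatalogStaleError` and `run_with_retries` are illustrative stand-ins, not the actual SnappyData API:

```python
import time

class CatalogStaleError(Exception):
    """Stand-in for SnappyData's CatalogStaleException (illustrative)."""

def run_with_retries(process_batch, max_attempts=10, base_backoff_ms=100):
    """Retry a streaming batch while the catalog is stale.

    Sleeps base_backoff_ms * attempt between attempts (100 ms, 200 ms, ...),
    mirroring the policy described in the commit message.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return process_batch()
        except CatalogStaleError:
            if attempt == max_attempts:
                raise  # out of attempts: propagate to the caller
            time.sleep(base_backoff_ms * attempt / 1000.0)
```

A batch that fails a couple of times with a stale catalog and then succeeds would complete transparently under this loop.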
Commits on Jul 18, 2019
-
Streaming test case for concurrent putInto. (TIBCOSoftware#1316)
Test to verify concurrent streaming putInto functionality, where multiple streaming jobs are started and perform a mix of insert and putInto operations on multiple tables.
Supriya Pillai authored Jul 18, 2019
SHA: e33f26c
Commits on Jul 19, 2019
-
SHA: 271f1f5
-
SHA: 94a1ee9
-
Hydra test coverage for cluster upgrade (TIBCOSoftware#1355)
- Added scripts for cluster upgrade testing. The script takes the builds as arguments, starts the cluster, performs DML operations, stops the cluster, and validates the results.
Supriya Pillai authored Jul 19, 2019
SHA: 4ac2640
-
Snap 3023 - handling stale catalog exception while updating the state table (TIBCOSoftware#1361)
- A stale catalog exception can also be encountered while updating the sink state table, hence the state table update query is now included in the retry block
- Fixing test cleanup
vatsal mevada authored Jul 19, 2019
SHA: 41d7ca5
-
Linking the latest store with the root repository.
Neeraj Kumar committed Jul 19, 2019
SHA: fe9e924
Commits on Jul 20, 2019
-
Add hydra test cases for DML operations with overflow table data (TIBCOSoftware#1343)
* Adding tests for dmlOps with overflow, SNAP-3055 and SNAP-2228.
* Adding a method to capture thread stack dumps and print a summary of the counts of the different thread types in the VM to a log file.
* Adding a method for inserting via a JDBC client using PreparedStatement addBatch.
* Adding a fix for the lead HA scenario, where the previous primary lead's host-port was used instead of retrieving a new one.
Sonal Agarwal authored Jul 20, 2019
SHA: c6f7290
-
Support interval expressions (TIBCOSoftware#1360)
Add interval expressions in addition to interval literals. Currently the product has no way to construct a CalendarInterval value from an expression (except for string concatenation and a cast from string to interval). With this change, expressions like "timestamp + interval hour(time) hours" can be used.
## Changes proposed in this pull request
- added a new IntervalExpression implementation that generates the code required for a set of expressions, each representing a long value with units specified separately
- parser support for the new interval expressions, as well as specifying multiple intervals in a single clause, e.g. interval 4 hours 3 minutes
- unit tests for interval expressions and literals
- split out some independent classes from ParamLiteral.scala into separate files
- added "hadoop." and "javax.jdo." prefixes to be passed through as system properties
Sumedh Wale authored Jul 20, 2019
SHA: d45f30e
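The semantics of an expression-valued interval such as "timestamp + interval hour(time) hours" can be illustrated with ordinary Python datetime arithmetic; this is only an analogy to the SQL feature, not the SnappyData code path:

```python
from datetime import datetime, timedelta

def plus_interval_hours(ts: datetime, t: datetime) -> datetime:
    # Analogue of "ts + interval hour(t) hours": the interval's
    # magnitude is computed from an expression, not a literal.
    return ts + timedelta(hours=t.hour)

ts = datetime(2019, 7, 20, 10, 0)
t = datetime(2019, 7, 20, 3, 30)   # hour(t) == 3
result = plus_interval_hours(ts, t)  # ts shifted by a 3-hour interval
```

Before this change only literal magnitudes (e.g. `interval 3 hours`) could be written; the commit makes the magnitude itself an arbitrary expression.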
Commits on Jul 22, 2019
-
SHA: 33e178f
-
SHA: 8616e41
-
Hydra test coverage (functional and concurrency) for the list of DML statements used in testing the Spark SQL driver (TIBCOSoftware#1357)
- Added hydra test coverage (functional and concurrency) for the list of DML statements used in testing the Spark SQL driver
- Concurrency test configuration to run the queries concurrently for 20 minutes using 50 threads
- Added an escape sequence for the special-character strings
SHA: d22fbc1
Commits on Jul 23, 2019
-
Avoid error being thrown when PreparedStatement.setDouble is used for comparison with a decimal column (TIBCOSoftware#1367)
* Change to avoid an error being thrown when PreparedStatement.setDouble is used for comparison with a decimal column. This fixes SNAP-3082.
SHA: ea6e2a0
Commits on Jul 24, 2019
-
SHA: 1bcd719
-
linking recent store and spark
Vatsal Mevada committed Jul 24, 2019
SHA: 333e798
-
SNAP-2636 Updating the UDF jar causes the cluster to be unusable on restart (TIBCOSoftware#1344)
* Fix for SNAP-2636 (also addresses the first part of SNAP-2658).
* Delete the jar from the server's local directory and Spark temp directory when DROP FUNCTION is called.
* The list of function jars is now maintained in the global command region, which gets updated with each CREATE/DROP FUNCTION call.
* At executors, do not fetch function dependencies if the function is in the dropped list maintained in the global command region.
* Properly skip the entries in the global command region during SnappyContext.initGlobalSnappyContext().
* Added a unit test case for the fix.
SHA: da3e941
-
Enforce unique alias in deploy command.
* Using "create" instead of "put" solves the unique alias name issue
* Catching EntryExistsException explicitly; otherwise it is categorised as severity 0 and the connection is closed
SHA: f83642b
Commits on Jul 25, 2019
-
Add support for RESTRICT and CASCADE - alter table drop column [SNAP-2482]
* Add support for RESTRICT and CASCADE, used in alter table drop column, in the snappy parser
* Added a new argument, named referentialAction, to the alterTable API.
SHA: d574541
-
Vatsal Mevada committed Jul 25, 2019
SHA: 8ddf75d
-
SNAP-2947: (TIBCOSoftware#1378)
- Hiding the internal sink state table "SNAPPYSYS_INTERNAL____SINK_STATE_TABLE" from the UI.
SHA: a921781
-
Adding dunit test for SNAP-3010 (TIBCOSoftware#1377)
## Changes proposed in this pull request
- Adding a dunit test for SNAP-3010
- Updating the Cassandra dunit test to download the Cassandra distribution into the `$GRADLE_USER_HOME/cassandraDist` or `$HOME/.gradle/cassandraDist` directory instead of the distributions directory. The distributions directory gets cleaned up every time precheckin or a clean build is run, causing a download of the complete Cassandra distribution each time.
vatsal mevada authored Jul 25, 2019
SHA: 56ef94f
-
SNAP-3055: added removeTableUnsafeIfExists to drop a catalog table in an inconsistent state (TIBCOSoftware#1339)
* added removeTableUnsafeIfExists to drop a catalog table in an inconsistent state
* adding a test for the DROP_CATALOG_TABLE_UNSAFE procedure
* worked on review comments
* review comment changes
* enhancements to REMOVE_METASTORE_ENTRY
* fixing the test for SNAP-3055
* review changes incorporated
SHA: 66e8397
Commits on Jul 26, 2019
-
SHA: 776a944
-
SHA: ae37e04
-
Added an automated test for the run command (TIBCOSoftware#1372)
* Added an automated test for the "./bin/snappy run" command
SHA: 2936516
-
Use synchronizedMap for DynamicReplacableConstant#termMap to avoid it getting corrupted (TIBCOSoftware#1381)
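Java's `Collections.synchronizedMap` guards every map operation with a single mutex so that concurrent writers cannot corrupt the map's internal state. A rough Python analogue of the same idea (the class and method names here are invented for illustration):

```python
import threading

class SynchronizedMap:
    """Dict wrapper that serializes every operation with one lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._map = {}

    def put(self, key, value):
        with self._lock:          # only one writer mutates at a time
            self._map[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._map.get(key, default)

    def __len__(self):
        with self._lock:
            return len(self._map)
```

Without the lock, interleaved structural updates from multiple threads can leave the underlying map in an inconsistent state, which is the corruption the commit avoids.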
SHA: 9a78ed9
-
Code changes for SNAP-2779 and SNAP-1338: (TIBCOSoftware#1373)
* Code changes for SNAP-2779 and SNAP-1338:
- Adding a Redundancy column in the Tables list to view the count of redundant copies.
- Adding a Redundancy Status column in the Tables list to monitor whether redundancy is satisfied or broken.
- Changes for maintaining Redundancy and isRedundancyImpaired details for the count of redundant copies and the redundancy satisfied/broken status.
- Display Redundancy as 'NA' if the distribution type is REPLICATE.
- Display the buckets count in red if any of the buckets is offline.
SHA: 1d793f1
-
SHA: 7e1c531
Commits on Jul 27, 2019
-
Snap 3061 (TIBCOSoftware#1352)
* changes to tackle the insufficient disk space issue in transactions
* fixed the test failure. Apparently in some cases the table name present in ColumnarStore is in lower case, causing a region-not-found exception. The fix is to upper-case the table name. Not yet debugged why the table name comes in lower case for some partitions.
SHA: b8f1d29
-
SHA: 37c6cb0
-
[Snap 2828] Serialize the write ops on Column Table (TIBCOSoftware#1362)
* Take a region lock on bulk write ops on a column table; in the smart connector case, use a connection to execute a procedure on a server to take the lock, and release the lock using the same connection when the operation is over
suranjan kumar authored Jul 27, 2019
SHA: f54ad3e
-
Snap 3055 (TIBCOSoftware#1380)
* added removeTableUnsafeIfExists to drop a catalog table in an inconsistent state
* adding a test for the DROP_CATALOG_TABLE_UNSAFE procedure
* worked on review comments
* review comment changes
* enhancements to REMOVE_METASTORE_ENTRY
* fixing the test for SNAP-3055
* review changes incorporated
* review changes
* removing unnecessary handling of exceptions
SHA: ebc4cfe
-
SHA: a8b31d0
-
Fixing [SNAP-2653] (TIBCOSoftware#1368)
* Mask credentials (in case of an s3 URI) in describe extended/formatted output.
* Mask credentials in case of s3 on the UI for external tables.
* Disallow non-admin users access to the tables in SNAPPY_HIVE_METASTORE.
SHA: a2cf856
-
External hive support in SnappySession (TIBCOSoftware#1220)
This adds support for the two components of Spark's hive session, along with parser changes:
1) a catalog that reads from the external hive metastore using an extra hive-enabled SparkSession
2) HiveSessionState from the hive-enabled SparkSession that adds additional resolution rules and strategies for such hive-managed tables
3) parser changes to delegate to the Spark parser for Hive DDL extensions. A special format "CREATE TABLE ... USING hive" is allowed that explicitly marks the table as using the hive provider.
There are two user-level properties:
- The standard "spark.sql.catalogImplementation", which will consult the external hive metastore in addition to the builtin catalog when the value is set to "hive". Note that the builtin catalog is consulted first and then the external one, so in case of name clashes the builtin one is given preference. For writes, all tables using "hive" as the provider will use the external hive metastore while the rest use the builtin catalog.
- "snappydata.sql.hiveCompatibility", which can be set to default/spark/full. When set to "spark" or "full", the default behaviour of "create table ..." without any USING provider or Hive DDL extensions changes to create a hive table instead of a row table.
A lazily instantiated instance of the hive-enabled SparkSession is kept inside SnappySessionState and is referred to if "spark.sql.catalogImplementation" is "hive" for the session. For 1), the list/get/create methods in SnappySessionCatalog have been overridden to read/write to the hive catalog after the snappy catalog when hive support is enabled on the session. For 2), wrapper Rule/Strategy classes have been added that wrap the extra rules/strategies from the hive session and run them only if the property has been enabled on the SnappySession. The code temporarily switches to the hive-enabled SparkSession when running hive rules/strategies, some of which expect the internal sharedState/sessionState to be those of hive.
Honour spark.sql.sources.default for the default data source: if spark.sql.sources.default is explicitly set then use the same in the SQL parser, with the default as 'row' like before.
Initial code for porting the hive suite.
Fix for SNAP-3100: make the behaviour of "drop schema" and "drop database" identical, dropping from both the builtin and external catalog, since "create schema" is identical to "create database".
Fixes for schema/database handling and improved help messages.
Improved CommandLineToolsSuite to not print failed output to the screen.
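The catalog resolution order described above — builtin catalog first, then the external hive metastore, with the builtin winning on a name clash, and writes routed by the table's provider — can be sketched as a chained lookup. The class and method names below are hypothetical, not the actual SnappySessionCatalog API:

```python
class ChainedCatalog:
    """Reads prefer the builtin catalog; writes go to the catalog
    implied by the table's provider ('hive' -> external metastore)."""

    def __init__(self):
        self.builtin = {}    # table name -> definition
        self.external = {}   # stand-in for the external hive metastore

    def lookup(self, name):
        # The builtin catalog is consulted first, so on a name
        # clash the builtin definition wins.
        if name in self.builtin:
            return self.builtin[name]
        return self.external[name]

    def create(self, name, definition, provider="row"):
        target = self.external if provider == "hive" else self.builtin
        target[name] = definition
```

This mirrors the stated preference: a table visible in both catalogs resolves to the builtin one, while tables created with the "hive" provider live only in the external metastore.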
SHA: 542404c
-
Snap 2772 (TIBCOSoftware#1376)
* Added code changes for SNAP-2772
* Added code changes for undeploying packages/jars from the server side.
SHA: 1491b2b
-
SHA: ccc382a
-
SHA: 466a8ec